Meta's move to terminate its professional fact-checking initiative has drawn sharp criticism from technology and media circles. Experts caution that removing this oversight could undermine trust in digital platforms, particularly as these services increasingly rely on self-regulation driven by commercial interests.
Yet much of this discussion misses a critical development: AI language models are now extensively employed to craft news summaries, headlines, and engagement-driven content, often at a scale and speed that traditional moderation systems cannot match. The concern extends beyond obvious misinformation or harmful material slipping through filters. A subtler issue is how these models select, frame, and emphasize information that appears accurate, thereby shaping public understanding.
Over time, these AI systems shape how opinions form: they generate the responses delivered through chatbots, virtual assistants, and the AI features built into news and social media platforms, and they have become a primary channel for accessing information.
Research indicates that large language models do not merely transmit neutral facts. Their outputs can subtly prioritize certain viewpoints while downplaying others, often without users' awareness.
Understanding Communication Bias
In a forthcoming paper in Communications of the ACM, computer scientist Stefan Schmid and I, a scholar of technology law and policy, demonstrate that large language models exhibit communication bias. Our findings show that these models may emphasize specific perspectives while omitting or minimizing alternative views. This bias can affect user attitudes and beliefs, regardless of whether the information provided is factually accurate.
Recent empirical studies using benchmark datasets have compared model outputs with political party positions around elections. These analyses reveal systematic variation in how current models handle content on public issues. Depending on the user persona or context provided in a prompt, models can lean toward particular stances, even while remaining factually correct.
This points to an emerging characteristic: persona-based steerability, where a model adjusts its tone and emphasis to align with perceived user expectations. For example, when responding to a question about climate legislation, a model might highlight environmental benefits for a user identifying as an activist, while stressing regulatory costs for someone described as a business owner—both responses factually accurate but framed differently.
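To see how such steerability might be probed in practice, here is a minimal sketch, not drawn from our paper, that sends the same factual question to a model under two different user personas and prints the answers side by side. It assumes the openai Python package and an API key in the environment; the model name, question, and persona descriptions are illustrative placeholders.

```python
# A minimal sketch, not from the paper described above, of probing persona-based
# steerability: send the same question under two user personas and compare framing.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable;
# the model name and persona texts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "What would be the main effects of the proposed climate legislation?"

PERSONAS = {
    "activist": "The user is a climate activist focused on environmental outcomes.",
    "business owner": "The user is a small-business owner concerned about regulatory costs.",
}


def ask_with_persona(persona: str) -> str:
    """Ask the same factual question, varying only the persona in the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,  # reduce sampling noise so differences mainly reflect the persona
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for name, persona in PERSONAS.items():
        print(f"--- answer framed for: {name} ---")
        print(ask_with_persona(persona))
        print()
    # Both answers may be factually accurate; the point is to compare which
    # considerations (benefits vs. costs) each answer chooses to emphasize.
```

Even with the sampling temperature set to zero, the two answers can differ markedly in which considerations they foreground, and that difference in framing, rather than any factual error, is what communication bias describes.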
Such alignment can be mistaken for flattery, a phenomenon known as sycophancy, where models tell users what they want to hear. However, communication bias is more fundamental. It stems from disparities in who designs these systems, the datasets they use, and the incentives guiding their development. When a small group of developers dominates the market and their models consistently favor certain viewpoints, minor behavioral differences can amplify into significant distortions in public discourse.
Limits of Regulatory Approaches
As society increasingly depends on large language models for information access, governments have introduced policies like the EU's AI Act and Digital Services Act to address AI bias concerns. These measures promote transparency and accountability but are not tailored to tackle the nuanced issue of communication bias in AI outputs.
Advocates for AI regulation often aim for neutrality, yet true neutrality is frequently unachievable. AI systems inherently reflect biases in their data, training, and design, and regulatory efforts often end up substituting one form of bias for another.
Communication bias concerns not just accuracy but how content is generated and framed. Consider asking an AI about controversial legislation: the answer is shaped not only by the facts but by how those facts are presented, which sources are emphasized, and the tone and perspective adopted.
This suggests that the root of the bias problem lies not solely in biased training data or skewed outputs, but in the market structures that shape how the technology is designed. When information access is concentrated among a few large language models, the risks of communication bias intensify. Beyond regulation, effective mitigation requires competition, user-driven accountability, and regulatory openness to diverse approaches to building and offering these models.
Current regulations typically focus on banning harmful outputs after deployment or mandating pre-launch audits. While these measures can catch obvious errors, they may be less effective against subtle communication bias that emerges through user interactions.
Moving Beyond AI Regulation
While regulation can help address some AI biases, it often falls short of tackling a deeper issue: the incentives driving the technologies that communicate information to the public.
Our research suggests that more enduring solutions involve fostering competition, transparency, and meaningful user participation. Empowering consumers to engage actively in how companies design, test, and deploy large language models is crucial.
These policies matter because AI will not only affect the information we access and the news we consume but also play a key role in shaping the future society we envision.