What is it about?
Today’s large language models don’t just pass on information; they also shape how conversations look and feel. These AI systems favor certain ways of speaking, and they often prioritize particular viewpoints and types of answers. As a result, some voices and experiences become more visible, while others are softened, sidelined, or ignored. This matters for individuals and society because everyday communication, from search to work emails to customer service and even public debate, is increasingly filtered through these systems. We propose specific measures to ensure that AI tools support a wider range of viewpoints, perspectives, and values, instead of quietly steering us toward one dominant way of thinking.
Why is it important?
What is new about our work is the claim that LLMs don’t just mirror online information; they actively shape which viewpoints and framings become more visible in everyday communication and public discourse. We introduce the notion of communication bias and argue that current legal frameworks, such as the EU AI Act and the Digital Services Act (DSA), address it only partially. We propose concrete measures around system design, auditing, and competition to keep AI-mediated communication pluralistic.
Perspectives
Writing this article has been especially rewarding because it enabled us to collaborate across law and computer science and pushed us to think differently about how large language models shape communication, not just how they process data. It has deepened our understanding of and engagement with AI governance debates and opened up new interdisciplinary collaborations on how to preserve pluralism and democratic values in an AI-mediated public sphere.
Adrian Kuenzler
University of Hong Kong
Read the Original
This page is a summary of: Communication Bias in Large Language Models: A Regulatory Perspective, Communications of the ACM, March 2026, ACM (Association for Computing Machinery).
DOI: 10.1145/3769689.