What is it about?
This article explores how artificial intelligence systems influence public debates and decision-making. Rather than acting as a neutral tool, AI behaves like a participant in arguments, deciding which voices are amplified and which are silenced. We show how algorithms in social media, law, and education can unintentionally reinforce bias, making some groups less visible or less credible. The study also suggests ways to design fairer systems that promote diversity and justice in digital discussions.
Featured Image
Photo by Mohamed Nohassi on Unsplash
Why is it important?
Our findings highlight that AI is not just a technical tool but an active shaper of public reasoning. By privileging dominant voices and sidelining marginalized perspectives, AI risks undermining fairness in democratic dialogue, legal systems, and education. Recognizing AI as a “co-arguer” opens new pathways for designing systems that foster inclusivity, accountability, and epistemic justice.
Perspectives
Writing this article was a rewarding experience, as it allowed me to connect argumentation theory with pressing ethical challenges in AI. I hope it sparks conversations about how technology can be designed to support, rather than silence, diverse voices in society.
MD FOYSAL AHMED
Southwest University of Science and Technology
Read the Original
This page is a summary of: Algorithmic co-arguers: AI-mediated argumentation and structural epistemic injustice, AI and Ethics, March 2026, Springer Science + Business Media.
DOI: 10.1007/s43681-026-01078-3.
You can read the full text: