What is it about?
Several instances in history confirm that misinformation and hate speech have been the main precursors of mass atrocities, grave human rights violations, and even genocide. The Holocaust, the Rwandan, Cambodian, and Srebrenica genocides, and most recently the Myanmar crisis did not begin with mass killings but with dominant regimes running sustained misinformation and hate speech campaigns against minority groups. Today, the spread of such content on social media through reshares, clicks, and likes is exacerbating the problem, which has already been linked to hate crimes and domestic terrorism. This study builds on the premise that misinformation and hate speech often work in tandem to fuel societal harm. It surveys existing research on the interconnected dynamics between these two phenomena, shedding light on their mutually reinforcing effects. It also examines recent advances in explainable AI methods designed to help users understand why an AI system classifies content as fake or hateful, upholding free speech while providing transparent, practical tools for mitigating harmful and misinforming narratives in everyday conversations.
Why is it important?
What makes this work unique is its pioneering exploration of the synergistic relationship between hate speech and fake news, a topic that has typically been addressed in isolation and rarely as an interconnected phenomenon from an AI and NLP perspective. The research stands out by presenting a comprehensive framework, or "feedback loop", showing how these two social incivilities fuel each other in online spaces. By combining insights from natural language processing (NLP) and explainable AI, it not only identifies these harmful dynamics but also offers solutions for building systems that are transparent, accountable, and comprehensible to users.
Perspectives
As tech giants such as X and Facebook continue to remove or scale back fact-checking features on their platforms (as of January 2025), this research becomes even more critical. It underscores the importance of developing alternative, robust systems to ensure safer online communities and access to reliable, vetted sources of information.
Mikel Ngueajio
Howard University
This page is a summary of: Decoding Fake News and Hate Speech: A Survey of Explainable AI Techniques, ACM Computing Surveys, February 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3711123.