What is it about?
Generative AI (GenAI) tools, such as ChatGPT and deepfake software, are being used to spread false or harmful information online. This study presents a new framework for understanding how people interact with misinformation and what can be done to prevent its spread. It breaks the problem down into three parts: why people believe and share false information, how AI-generated content is created and spreads online, and which policies or regulations might help. Drawing on expertise from computer science, social science, law, and public policy, the study offers a complete picture of the risks GenAI poses and the actions that governments, platforms, and individuals can take.
Why is it important?
This paper is among the first to provide an end-to-end framework for tackling misinformation in the age of generative AI. By integrating behavioral science, AI detection tools, and regulatory analysis, it identifies actionable strategies to strengthen digital resilience and trust. The iGyro project is also unique in its pan-Asian focus and interdisciplinary design, providing a much-needed blueprint for countries developing policies around AI and digital misinformation. The research contributes to building more informed societies and responsive policies as synthetic content becomes increasingly indistinguishable from real information.
Perspectives
As researchers working across disciplines—from computing and communication to law and public policy—we believe that addressing misinformation in the GenAI era requires more than technical fixes. It demands a deep understanding of why people consume and share content, and how platforms and policies shape those decisions. Through the iGyro project, we are building not just tools, but also partnerships and policy dialogues to make the internet safer and more trustworthy. Our goal is to ensure that as GenAI evolves, it is used to strengthen—not erode—public trust and civic resilience.
Dr. Kokil Jaidka
National University of Singapore
Read the Original
This page is a summary of: Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy, Digital Government Research and Practice, February 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3689372.