What is it about?
Artificial intelligence is increasingly used to support decisions in areas such as hiring, healthcare, finance, education, and criminal justice. However, AI systems can unintentionally reproduce or amplify social biases present in data or introduced through design choices. While many studies focus on technical solutions, the role of humans in identifying and mitigating bias is equally important. This paper presents a systematic review of 100 research papers published between 2018 and 2024 that examine how humans contribute to addressing bias in AI systems. We analyze the roles humans play throughout the AI lifecycle, including developers, domain experts, auditors, and end users. The review also examines the methods researchers use to study bias, such as user studies, design frameworks, and evaluation approaches. By organizing and synthesizing existing research, this work provides a comprehensive overview of how human involvement supports the detection, understanding, and mitigation of bias in AI systems.
Why is it important?
AI systems increasingly influence decisions that affect people’s lives. If bias is not properly addressed, these systems can produce unfair outcomes and reinforce existing inequalities. Although algorithmic techniques for fairness are widely studied, addressing bias requires more than technical solutions alone. This review highlights the critical role humans play in identifying biases, interpreting AI behavior, and designing interventions to mitigate unfair outcomes. By mapping current research on human roles, bias mitigation strategies, and research methods, the paper provides guidance for researchers and practitioners who want to build more fair, accountable, and transparent AI systems. The insights from this review help advance research at the intersection of human-computer interaction and responsible AI, supporting the development of AI systems that better serve diverse communities.
Read the Original
This page is a summary of: A Systematic Review on Human Roles, Solutions, and Methodological Approaches to Address Bias in AI, ACM Computing Surveys, February 2026, ACM (Association for Computing Machinery). DOI: 10.1145/3793667.