What is it about?

Artificial intelligence (AI) is rapidly transforming how we detect disease, guide treatment decisions, and predict patient outcomes. However, when systems are trained on data that under-represent certain groups, whether by geography, gender, age, or socioeconomic status, they can inadvertently favor some populations while performing poorly for others. In our survey, we show how bias can emerge at every stage of the machine learning pipeline: in pre-processing, where the selection and representation of patient data and the way features are defined may introduce disparities; in in-processing, where the algorithm’s mechanisms or optimization procedures during model training can amplify bias; and in post-processing, where the calibration, thresholding, and evaluation of predictions across demographic groups can perpetuate unfair outcomes. For each stage, we outline practical methods to identify and address bias: for instance, using counterfactual testing to determine whether changing only a patient’s demographic details affects the model’s prediction, and applying techniques like fairness constraints to promote more equitable outcomes.
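To make the idea of counterfactual testing concrete, here is a minimal illustrative sketch in Python. It is not code from the survey: the model, the feature layout (two clinical features plus one binary demographic attribute), and the synthetic data are all hypothetical assumptions for the example.

```python
# Illustrative sketch only: a toy model on synthetic data with a
# hypothetical feature layout (two clinical features + one binary
# demographic attribute in column 2).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
clinical = rng.normal(size=(500, 2))                       # hypothetical clinical features
demographic = rng.integers(0, 2, size=500).astype(float)   # hypothetical group flag
X = np.column_stack([clinical, demographic])
y = (clinical[:, 0] + 0.5 * clinical[:, 1] > 0).astype(int)  # synthetic outcome

model = LogisticRegression().fit(X, y)

def counterfactual_flip_rate(model, X, demo_col):
    """Fraction of patients whose predicted label changes when only the
    demographic attribute is flipped and everything else is held fixed."""
    X_cf = X.copy()
    X_cf[:, demo_col] = 1.0 - X_cf[:, demo_col]
    return float(np.mean(model.predict(X) != model.predict(X_cf)))

# A non-zero flip rate means the demographic attribute alone is moving
# predictions, which warrants further investigation.
print(f"Flip rate: {counterfactual_flip_rate(model, X, demo_col=2):.1%}")
```

The design point is that only the demographic column changes between the two prediction calls, so any disagreement between them can be attributed to that attribute alone.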


Why is it important?

Biased AI systems can have serious, even life-threatening, consequences, such as missing early signs of cancer in certain ethnic groups or under-triaging elderly patients in emergency settings. These failures not only harm individuals but also undermine public trust in AI-driven healthcare. Our paper highlights real-world examples of such disparities and offers practical guidance for researchers, clinicians, and regulators on how to prevent them. We emphasize detecting bias with techniques like fairness metrics and visualization methods, mitigating it through approaches such as re-weighting, adversarial debiasing, and threshold adjustments, and weighing the ethical and legal considerations involved in accounting for bias across patient populations. By following this roadmap, healthcare institutions and developers can build systems that serve all patients equitably, minimize harm, uphold ethical standards, and foster greater acceptance of AI innovations in medical practice.
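As a hedged illustration of two of the techniques named above, the sketch below computes one common fairness metric, the equal-opportunity gap (the difference in true-positive rates between groups), and then applies a simple per-group threshold adjustment. The data are synthetic and the arrays `scores`, `labels`, and `group` are hypothetical stand-ins for model risk scores, true outcomes, and demographic membership; none of this is taken from the paper.

```python
# Illustrative sketch only: synthetic scores simulating a model that
# systematically under-scores one demographic group.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)          # hypothetical 0/1 demographic group
risk = rng.uniform(size=n)                  # underlying "true" risk
labels = (risk > 0.5).astype(int)           # true outcomes
# Simulate miscalibration: scores run ~0.15 low for group 1.
scores = np.clip(risk - 0.15 * group + rng.normal(0, 0.05, n), 0, 1)

def tpr(scores, labels, threshold):
    """True-positive rate: share of actual positives flagged at this threshold."""
    return (scores >= threshold)[labels == 1].mean()

# Fairness metric: equal-opportunity gap at a single shared threshold.
gap = abs(tpr(scores[group == 0], labels[group == 0], 0.5)
          - tpr(scores[group == 1], labels[group == 1], 0.5))
print(f"Equal-opportunity gap at a shared 0.5 threshold: {gap:.3f}")

# Mitigation by threshold adjustment: choose a per-group cutoff that
# brings each group's TPR close to a common target.
target = 0.80
for g in (0, 1):
    s, l = scores[group == g], labels[group == g]
    candidates = np.linspace(0, 1, 101)
    best = min(candidates, key=lambda t: abs(tpr(s, l, t) - target))
    print(f"group {g}: threshold {best:.2f}, TPR {tpr(s, l, best):.2f}")
```

In this toy setup a single shared threshold produces a visible gap in true-positive rates, while per-group thresholds roughly equalize them; in practice, the choice of metric and whether group-specific thresholds are appropriate are exactly the kinds of ethical and legal questions the survey discusses.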

Perspectives

This survey underscores that fairness in AI is not an optional feature but a fundamental requirement that should be built into AI systems from the design stage. We hope this work provides meaningful guidance for AI researchers and healthcare professionals, fostering collaboration toward equitable and trustworthy AI systems that improve healthcare outcomes for all populations.

Avash Palikhe
Florida International University

Read the Original

This page is a summary of: AI-driven healthcare: Fairness in AI healthcare: A survey, PLOS Digital Health, May 2025, PLOS. DOI: 10.1371/journal.pdig.0000864.
You can read the full text (open access) via the DOI above.

