What is it about?

The idea behind this work is that no single individual should be responsible for removing bias from a dataset. It builds on previous work, D-Bias, which lets users modify the causal network that underlies a dataset. FairPlay extends this idea by creating a collaborative environment where multiple stakeholders can propose modifications to that network and work toward consensus on its final structure.
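To make the idea concrete, below is a minimal, hypothetical sketch of the kind of structure involved: a causal graph over dataset attributes, plus a stakeholder-proposed edit that takes effect once it gains majority support. The attribute names, the EdgeEdit class, and the majority-vote rule are illustrative assumptions for this sketch, not FairPlay's actual interface or consensus mechanism.

```python
# Illustrative sketch only: a causal graph over dataset attributes and a
# stakeholder-proposed edge edit accepted by simple majority. Attribute names,
# the EdgeEdit class, and the voting rule are assumptions, not the paper's
# actual implementation.
from dataclasses import dataclass, field

import networkx as nx


@dataclass
class EdgeEdit:
    """A stakeholder's proposal to add or remove a causal edge."""
    source: str
    target: str
    action: str                      # "add" or "remove"
    votes_for: set = field(default_factory=set)

    def vote(self, stakeholder: str) -> None:
        self.votes_for.add(stakeholder)

    def accepted(self, n_stakeholders: int) -> bool:
        return len(self.votes_for) > n_stakeholders / 2


# Causal graph over dataset attributes (attribute names are made up here).
causal_graph = nx.DiGraph()
causal_graph.add_edges_from([("gender", "income"), ("education", "income")])

# One stakeholder proposes removing a potentially biased edge; others vote.
proposal = EdgeEdit("gender", "income", action="remove")
for stakeholder in ["analyst", "domain_expert"]:
    proposal.vote(stakeholder)

if proposal.accepted(n_stakeholders=3):
    causal_graph.remove_edge(proposal.source, proposal.target)

print(list(causal_graph.edges()))    # [('education', 'income')]
```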

Why is it important?

Without this tool, there wouldn’t be a structured way for a group to engage meaningfully in discussions about bias in a dataset. Our user studies showed that even non-technical participants, our primary audience, could interact with the tool intuitively, making the process both accessible and engaging. By turning what is often an opaque, technical task into a more transparent and participatory one, FairPlay makes fairness discussions more inclusive and actionable.

Read the Original

This page is a summary of: FairPlay: A Collaborative Approach to Mitigate Bias in Datasets for Improved AI Fairness, Proceedings of the ACM on Human-Computer Interaction, May 2025, ACM (Association for Computing Machinery).
DOI: 10.1145/3710982.
