What is it about?

When we talk about “fairness” in artificial intelligence (AI) and machine learning (ML), most people focus on whether systems distribute opportunities and resources equally: for example, whether a hiring algorithm gives women and men the same chance at a job. As this paper argues, however, that view is too narrow. AI systems can also cause harm by reinforcing stereotypes and treating some groups as inferior. Text and image generators, for instance, may repeatedly portray doctors as men or criminals as people of color. Such outputs do not take away opportunities or money directly, but they damage how groups are represented and how people relate to one another in society.

To address these problems fully, we need to think about fairness in AI more broadly. The paper proposes a new framework that combines two key ideas: “distributive equality” and “relational equality.” Bringing these together helps us better understand what makes unfair systems harmful and how to design them in ways that challenge, rather than reproduce, structural inequalities. The paper also suggests practical steps for applying this approach throughout the AI development process.

Read the Original

This page is a summary of: What Is the Point of Equality in Machine Learning Fairness? Beyond Equality of Opportunity, ACM Journal on Responsible Computing, September 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3766539.