What is it about?

Researchers studying AI fairness have attempted to apply the critical framework of intersectionality to their work; we argue that they often do so in an incomplete or incorrect way. We examine how AI fairness researchers engage with intersectionality by reviewing 30 relevant scientific articles, and we find gaps between how they use the framework and its purpose: identifying power relations in society and how those relations shape inequality. For example, AI researchers often interpret intersectionality narrowly, as being only about the overlap of different demographic groups, and neglect societal and historical context. We provide actionable recommendations for AI researchers to engage with intersectionality more faithfully.
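To make one part of this concrete, here is a minimal sketch (not from the paper; the scenario and numbers are invented for illustration) of why even the narrow, demographic-subgroup reading matters: fairness checks on single demographic axes can all pass while an intersectional subgroup is still disadvantaged.

    # Toy illustration with invented numbers: single-axis fairness checks
    # can pass while an intersectional subgroup fares worse.
    counts = {
        # (gender, race): (approved, total applicants)
        ("woman", "white"): (60, 100),
        ("woman", "Black"): (40, 100),
        ("man", "white"): (40, 100),
        ("man", "Black"): (60, 100),
    }

    def rate(groups):
        """Approval rate pooled over the given (gender, race) groups."""
        approved = sum(counts[g][0] for g in groups)
        total = sum(counts[g][1] for g in groups)
        return approved / total

    # Single-axis checks: every marginal group has the same 50% rate.
    print(rate([g for g in counts if g[0] == "woman"]))  # 0.5
    print(rate([g for g in counts if g[0] == "man"]))    # 0.5
    print(rate([g for g in counts if g[1] == "white"]))  # 0.5
    print(rate([g for g in counts if g[1] == "Black"]))  # 0.5

    # Intersectional check: disparities appear only at the intersections.
    for group in counts:
        print(group, rate([group]))  # 0.4 for some groups, 0.6 for others

Even this example only captures the narrowest sense of intersectionality; the paper argues that the framework is fundamentally about power relations and societal and historical context, which no subgroup metric alone can capture.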


Why is it important?

Getting AI fairness right matters for everyone whose life is touched by AI. Using intersectionality as a lens enriches AI fairness work by encouraging us to pay attention to social and historical context and to power relations in society, and to consider how our technical decisions shape AI's impact on people in the world. It is therefore important to interpret intersectionality carefully for AI fairness work, by engaging with core intersectionality scholarship rather than diluting it.

Perspectives

Computer scientists hate it when I say that we should build AI systems that are smaller, tightly scoped, and iteratively built and maintained with community feedback. This is because our discipline often encourages us to build universally, neutrally, and quantifiably. But AI has a large impact on people's lives, and communities and contexts vary widely. We know that quantitative and legalistic definitions of fairness let a lot of people fall through the cracks, so I want a world where we replace solely quantitative measures of fairness with more expansive, people-centred ones: definitions that might look more like gathering people together to design, iteratively develop, and evaluate if, what, where, why, and how any tech solution should be built and used.

Vagrant Gautam
Universität des Saarlandes

Read the Original

This page is a summary of: Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, August 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3600211.3604705.
