What is it about?

Ranking, recommendation, and retrieval systems are widely used in online platforms and other societal systems, including e-commerce, media streaming, admissions, gig platforms, and hiring. In recent years, a large "fair ranking" research literature has developed around making these systems fair to the individuals, providers, or content being ranked. Most of this literature defines fairness for a single instance of retrieval, or as a simple additive notion across multiple instances of retrieval over time. This work provides a critical overview of that literature, detailing the often context-specific concerns such approaches miss: the gap between high ranking placement and true provider utility, spillovers and compounding effects over time, induced strategic incentives, and the effect of statistical uncertainty. We then chart a path forward for a more holistic and impact-oriented fair ranking research agenda, including methodological lessons from other fields and the role of the broader stakeholder community in overcoming data bottlenecks and designing effective regulatory environments.


Why is it important?

Most proposed fair ranking methods assume laboratory conditions and do not consider "real-world" rankings. However, rankings are always embedded in larger systems, and the visibility an item receives does not directly translate into utility for its provider. We outline current research gaps and describe possible ways to close them.
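To make the visibility-versus-utility gap concrete, here is a small illustrative sketch (not from the paper; all numbers are hypothetical): under a standard position-based exposure model, alternating two orderings gives two providers equal average exposure, yet their realized utility (expected clicks) still differs when their click propensities differ.

```python
# Hypothetical position-based exposure weights for a 2-slot ranking.
exposure = [0.8, 0.2]

# Hypothetical per-provider click propensity given an impression.
relevance = {"A": 0.9, "B": 0.3}

def expected_utility(order):
    """Expected clicks each provider receives under one fixed ranking."""
    return {item: exposure[pos] * relevance[item]
            for pos, item in enumerate(order)}

# Alternate the two orderings so both providers get equal average exposure.
u1 = expected_utility(["A", "B"])
u2 = expected_utility(["B", "A"])
avg_utility = {k: (u1[k] + u2[k]) / 2 for k in relevance}
avg_exposure = {k: (exposure[0] + exposure[1]) / 2 for k in relevance}

# Exposure is equalized (0.5 each on average), but expected clicks are not:
# provider A ends up with roughly three times the utility of provider B.
print(avg_exposure, avg_utility)
```

The point of the sketch is only that "equal exposure" is a proxy: the mapping from placement to provider benefit depends on item- and context-specific factors the exposure model does not capture.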

Read the Original

This page is a summary of: Fair ranking: a critical review, challenges, and future directions, June 2022, ACM (Association for Computing Machinery), DOI: 10.1145/3531146.3533238.
You can read the full text:


