What is it about?

Video game labels such as “action”, “replayable”, “anime”, “difficult”, or “cozy” are useful for both organizing and discovering games. Different gamers may label the same game differently; to audit culture-based biases in game annotations, we ran a large-scale survey of Xbox gamers across 16 countries and 9 languages. Differences in game labels were traced to two factors: cultural differences (quantified using Hofstede's cultural dimensions) and linguistic differences (assessed by matching responses from bilingual speakers).
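As a rough illustration of the cultural side of this analysis, the sketch below correlates per-country agreement on a label with a Hofstede dimension score. The data, country set, column names, and choice of dimension here are hypothetical examples, not figures from the paper.

```python
# Illustrative sketch only: hypothetical data, not the paper's actual pipeline.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical survey results: share of gamers in each country applying a label.
responses = pd.DataFrame({
    "country":   ["US", "US", "JP", "JP", "BR", "BR"],
    "label":     ["cozy", "difficult", "cozy", "difficult", "cozy", "difficult"],
    "agreement": [0.82, 0.64, 0.71, 0.88, 0.77, 0.59],
})

# Hypothetical Hofstede scores per country (individualism dimension as an example).
hofstede = pd.DataFrame({
    "country": ["US", "JP", "BR"],
    "individualism": [91, 46, 38],
})

# Correlate per-country label agreement with the cultural dimension.
merged = responses.merge(hofstede, on="country")
for label, group in merged.groupby("label"):
    rho, p = spearmanr(group["agreement"], group["individualism"])
    print(f"{label}: Spearman rho={rho:.2f}, p={p:.2f}")
```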


Why is it important?

The paper proposes methods to account for cultural differences when annotating labels, showing, for example, that recruiting a more culturally diverse pool of annotators yields better predictive model performance. It also provides a step-by-step framework for auditing data labels and emphasizes the need for more qualitative evaluation of cultural differences.
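To illustrate the annotator-diversity point, the sketch below trains the same classifier on labels from a hypothetical single-region pool and a hypothetical diverse pool, then scores both against "global" reference labels. All data are synthetic; the paper's actual experiments use its survey labels and models.

```python
# Illustrative sketch only: synthetic data standing in for annotated game labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical game features (e.g., genre or playtime embeddings).
X = rng.normal(size=(500, 10))
# "Global" reference labels, standing in for a broad audience's judgments.
global_labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Labels from a single-region pool: systematically biased toward one feature.
homogeneous_labels = (X[:, 0] > 0.3).astype(int)
# Labels aggregated across a diverse pool: noisier, but less systematically biased.
diverse_labels = (global_labels ^ (rng.random(500) < 0.1)).astype(int)

for name, y_train in [("homogeneous pool", homogeneous_labels),
                      ("diverse pool", diverse_labels)]:
    model = LogisticRegression(max_iter=1000).fit(X, y_train)
    # Evaluate against the global reference labels.
    acc = model.score(X, global_labels)
    print(f"{name}: accuracy vs. global labels = {acc:.2f}")
```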

Read the Original

This page is a summary of: Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems, June 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3593013.3594098.
You can read the full text via the DOI above.

