What is it about?

Biases in AI chatbots such as ChatGPT are often seen as mere flaws. We argue that these biases reflect the cultural context of the data the chatbots were trained on, giving rise to new 'AI cultures'. We show that these biases are part of a network of interactions between humans and AI, meaning they cannot simply be removed but must be understood in practice. This perspective suggests that understanding and managing these cultural influences is crucial for integrating AI into organizations.

Read the Original

This page is a summary of: Beyond bias: ethnographic approaches to “AI culture”, Journal of Organizational Ethnography, April 2026, Emerald,
DOI: 10.1108/joe-07-2024-0035.