What is it about?

Red teaming is a key approach to ensuring AI safety and security. Our study examines how red teaming is practiced in the workplace. First, the paper offers a comprehensive definition of what red teaming for generative AI should involve. Second, it identifies the organizational and managerial issues that red teaming managers and red teamers should be aware of.


Why is it important?

We reveal how internal resistance and organizational hierarchy often prevent red teamers from uncovering real AI harms. Generative AI safety depends not just on technology, but on transforming how organizations work and how they value ethical testing.

Perspectives

This project changed how I see AI safety. I started out thinking red teaming was a technical task—but through our interviews, I realized it’s deeply human, emotional, and political. I hope this paper gives voice to the invisible labor and moral courage of those working behind the scenes to make AI safer for everyone.

Bixuan Ren
Syracuse University

Read the Original

This page is a summary of: Organization Matters: A Qualitative Study of Organizational Dynamics in Red Teaming Practices For Generative AI, Proceedings of the ACM on Human-Computer Interaction, October 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3757641.
You can read the full text via the DOI above.
