What is it about?
Generative AI tools like ChatGPT, Midjourney, and Canva now create much of the content we see online. But these tools often reproduce social stereotypes, such as linking women with caregiving roles or portraying certain races in harmful ways. In our 2024 large-scale study, we asked these AI tools to generate over 1,100 images across different scenarios and found that they consistently produced more stereotypical images than we would normally expect.

We then ran five follow-up experiments with nearly 3,000 U.S. adults in 2024 and 2025. The results showed that when AI offers more stereotypical options, people are more likely to choose them, not because they intend to, but because those images feel more familiar and easier to pick. Reducing the number of stereotypical options in the set helped people choose less stereotypically.

However, simply warning people about stereotypes didn’t always work. When the stereotypes seemed “harmless” (for example, women shown as nurses), people actually picked them more often after the warning. They were more likely to correct themselves only when the stereotypes were clearly harmful (for example, portraying Black people as criminals). These findings suggest that the way AI shapes the choices we see can quietly reinforce stereotypes, and that tackling this problem requires more than just raising awareness.
Why is it important?
This study shows that today’s generative AI tools can copy and even strengthen social stereotypes. They tend to produce more stereotypical content, which makes people more likely to choose or repeat it. Limiting this kind of AI-generated content can help reduce the spread of stereotypes, but simply telling people about the problem doesn’t always change their behavior, especially when the stereotypes seem harmless. These results highlight the need to make AI systems more fair and responsible so they don’t reinforce negative social biases.
Perspectives
Writing this paper has been one of the most rewarding experiences. I feel incredibly fortunate to have worked alongside my two brilliant co-authors, Professor Lan Xia and Wenting Zhong. This work is not only about understanding the influence of AI on stereotypes. It’s also a testament to what a great team can achieve together. I’m deeply grateful to them both for making the process as meaningful as the results themselves.
Fei Gao
Bentley University
Read the Original
This page is a summary of: Stereotypes in artificial intelligence-generated content: Impact on content choice. Journal of Experimental Psychology: Applied, September 2025, American Psychological Association (APA).
DOI: 10.1037/xap0000548