What is it about?

In this work, we analyze human judgments of self-presentations written by humans and generated by AI systems. We find that people cannot detect AI-generated self-presentations because their judgments rely on intuitive but flawed heuristics for AI-generated language. We demonstrate that AI systems can exploit these heuristics to produce text perceived as “more human than human.”


Why is it important?

Human communication is now rife with language generated by AI. Every day, across the web, chat, email, and social media, AI systems produce billions of messages that could be perceived as created by humans. Our results raise the question of how humanity will adapt to AI-generated text, illustrating the need to reorient the development of AI language systems to ensure that they support rather than undermine human cognition.

Read the Original

This page is a summary of: Human heuristics for AI-generated language are flawed, Proceedings of the National Academy of Sciences, March 2023.
DOI: 10.1073/pnas.2208839120.
You can read the full text via the DOI above.
