What is it about?

Deepfakes are videos that have been manipulated by AI to make people appear to say or do things they never said or did. We evaluate how accurately people discern authentic videos from deepfakes, comparing the performance of 15,016 individuals with that of the leading AI model on 166 videos. We find that ordinary humans and the leading AI are similarly accurate but make different kinds of mistakes.


Why is it important?

Our findings provide practical insights for designing content moderation systems that flag video-based misinformation: (1) humans are quite good at detecting visual manipulations of faces, (2) the wisdom of crowds is more accurate than individuals, (3) the combination of humans and AI is generally more accurate than either alone, with the caveat that mispredictions by the AI frequently lead people to make less accurate judgments, and (4) the leading AI model is prone to unexpected errors on out-of-distribution samples.
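The second insight, that a crowd's aggregated judgment beats an individual's, follows from a standard majority-vote argument: if each judge is independently right more often than not, the majority of a crowd is right far more often. A minimal simulation sketches this, using hypothetical accuracy values for illustration (not figures from the paper):

```python
import random

random.seed(0)

def crowd_accuracy(p_individual, crowd_size, n_trials=10_000):
    """Estimate the accuracy of a simple majority vote among
    independent judges, each correct with probability p_individual.
    These parameters are illustrative assumptions, not study results."""
    correct = 0
    for _ in range(n_trials):
        # Count how many judges vote correctly on this video
        votes = sum(random.random() < p_individual for _ in range(crowd_size))
        if votes > crowd_size / 2:
            correct += 1
    return correct / n_trials

solo = 0.65  # assumed accuracy of a single judge, for illustration only
crowd = crowd_accuracy(solo, crowd_size=21)
print(f"individual: {solo:.2f}, crowd of 21 (majority vote): {crowd:.2f}")
```

The same aggregation logic underlies the human-AI caveat above: a confidently wrong AI prediction is not an independent vote, so averaging it in can pull the crowd toward the error rather than away from it.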

Perspectives

The idea that deepfake manipulations can make people appear to say things they never said makes many people anxious. This article presents evidence that ordinary people are quite good at detecting purely visual algorithmic manipulations of people's faces. Moreover, people can draw on what they know about what an individual would plausibly say or do to evaluate the authenticity of a video. I hope these results offer optimism about the power of critical thinking in media consumption. People should not simply defer to AI judgments about whether a video has been manipulated, because AI detection of deepfakes is far from perfect.

Matthew Groh
Massachusetts Institute of Technology

Read the Original

This page is a summary of: Deepfake detection by human crowds, machines, and machine-informed crowds, Proceedings of the National Academy of Sciences, December 2021,
DOI: 10.1073/pnas.2110013119.
You can read the full text via the DOI above.

