What is it about?
Deepfakes are videos that have been manipulated by AI to make people appear to say or do things they never said or did. We evaluate how accurately people discern authentic videos from deepfakes, comparing the performance of 15,016 individuals to that of the leading AI model on 166 videos. We find that ordinary humans and the leading AI are similarly accurate but make different kinds of mistakes.
Why is it important?
Our findings offer practical guidance for designing content moderation systems that flag video-based misinformation: (1) humans are quite good at detecting visual manipulations of faces, (2) the wisdom of the crowd is more accurate than individual judgments, (3) combining humans and AI is generally more accurate than either alone, with the caveat that mispredictions by the AI frequently lead people to make less accurate judgments, and (4) the leading AI model is prone to unexpected errors on out-of-distribution samples.
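To illustrate point (2), the "wisdom of the crowd" effect can be sketched with a toy simulation: if each viewer independently labels a video correctly with some probability better than chance, a majority vote over many viewers is more accurate than any single viewer. This is a minimal sketch with made-up numbers (the viewer count, accuracy, and video count below are hypothetical, not the paper's data or method):

```python
import random

random.seed(0)

# Hypothetical setup: each video is authentic (0) or a deepfake (1).
# Each simulated viewer labels a video correctly with probability 0.7.
N_VIDEOS = 200
P_CORRECT = 0.7

def crowd_vote(truth, n_viewers=25, p_correct=P_CORRECT):
    """Majority vote over n_viewers independent noisy judgments."""
    votes = [truth if random.random() < p_correct else 1 - truth
             for _ in range(n_viewers)]
    return 1 if 2 * sum(votes) > len(votes) else 0

videos = [random.randint(0, 1) for _ in range(N_VIDEOS)]

# One viewer per video vs. a 25-person crowd per video.
individual_acc = sum(random.random() < P_CORRECT for _ in videos) / len(videos)
crowd_acc = sum(crowd_vote(v) == v for v in videos) / len(videos)
```

Under these assumptions the crowd's majority vote is correct far more often than a lone viewer, which is the statistical intuition behind the paper's crowd-versus-individual comparison.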
Read the Original
This page is a summary of: Deepfake detection by human crowds, machines, and machine-informed crowds, Proceedings of the National Academy of Sciences, December 2021, DOI: 10.1073/pnas.2110013119.