What is it about?

Deep learning algorithms enable the creation of audiovisual material that resembles genuine recordings but depicts events that never actually happened (deepfakes). This development could benefit political actors, who may falsely claim that unfavorable but genuine audiovisual content was generated by artificial intelligence (the liar's dividend). The results of our experiment show that discrediting genuine audiovisual evidence as fake news can indeed be a successful strategy for political actors, one that healthy democracies need to be aware of.

Read the Original

This page is a summary of: Deepfake! A liar's dividend for audiovisual material. Psychology of Popular Media, March 2026, American Psychological Association (APA), DOI: 10.1037/ppm0000665.