What is it about?

This work is about deepfake audio and video detection. The proposed method combines multiple models to predict whether a video is real or fake, and uses a majority-voting strategy to reach the final decision, as sketched below.
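As a rough illustration only (not the authors' implementation), majority voting over per-model real/fake decisions can be expressed in a few lines of Python. The model labels and the tie-breaking rule below are assumptions made for this sketch.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model labels ('real' or 'fake') into a final decision.

    Ties are broken by flagging the video as 'fake' — an assumption made
    for this sketch, not a detail taken from the paper.
    """
    counts = Counter(predictions)
    if counts["fake"] == counts["real"]:
        return "fake"
    return counts.most_common(1)[0][0]

# Hypothetical decisions for one video from, e.g., an audio model,
# a visual model, and an audio-visual model.
per_model_labels = ["fake", "real", "fake"]
print(majority_vote(per_model_labels))  # -> "fake"
```

The idea is that each model votes independently, so a single model's mistake is less likely to flip the final prediction.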


Why is it important?

Due to the rise in manipulated and fake videos, we need automated methods to detect deepfake videos quickly and prevent their spread on social media. Deepfakes take many forms: spreading false political propaganda, generating fake adult videos, synthesizing deepfake calls, generating fake news, stealing identities for financial gain, and slandering others have all recently become common.

Perspectives

The audio-visual, or multimodal, aspect of deepfake content is less explored. It needs more attention from the multimedia forensics research community, which should collaborate to propose new methods for this challenging problem.

Adil Shahzad
Academia Sinica

Read the Original

This page is a summary of: Multimodal Forgery Detection Using Ensemble Learning, November 2022, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.23919/apsipaasc55919.2022.9980255.
