What is it about?

The paper evaluates sequence-based deepfake detection models (Conv-LSTM and Facenet-LSTM) against adversarial attacks (FGSM and CW-L2) that craft adversarially perturbed deepfake video frames. We show that these sequence-based deepfake video detectors can be fooled by such adversarial examples.

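As a rough illustration of the kind of attack involved, the sketch below shows a single FGSM step against a sequence-based detector, written in PyTorch. The model interface, tensor shapes, and function names are assumptions made for this example, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, frames, labels, epsilon):
    """Craft FGSM-perturbed video frames against a sequence-based detector.

    Assumes `model` maps a batch of frame sequences (B, T, C, H, W)
    to real/fake logits; shapes and names here are illustrative only.
    """
    frames = frames.clone().detach().requires_grad_(True)
    logits = model(frames)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    # Single-step perturbation in the direction of the loss gradient's sign.
    adv_frames = frames + epsilon * frames.grad.sign()
    return adv_frames.clamp(0.0, 1.0).detach()
```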

Why is it important?

Prior work suggests that sequence-based deepfake detector models outperform CNN-based models and are more likely to be deployed in the real world. Our experimental results strongly suggest that sequence-based deepfake detectors remain vulnerable to adversarial attacks in both white-box and black-box setups. We want to highlight the importance of developing more robust sequence-based deepfake detectors and to open up directions for future research.

Read the Original

This page is a summary of: Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation, May 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3494109.3527194.
