What is it about?
DeepFakes are a genre of synthetic videos in which a subject's face is modified into a target face, simulating the target person in a chosen context and creating convincingly realistic footage of events that never occurred. Effective countermeasures against such DeepFakes are needed to protect personal security and privacy. In this work, we propose a proactive framework that combats DeepFakes before the manipulation ever takes place.
Why is it important?
We propose a new method for defending against DeepFakes proactively, from the perspective of adversarial search in a latent face space. Our method embeds adversarial information into the latent code of the face image, so it produces protected faces of high visual quality whose embedded protection is more difficult to detect.
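To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of adversarial search in a latent face space. It assumes a pretrained face generator G, a latent code w obtained by inverting the original photo, and a differentiable DeepFake model F; the loss terms and their weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: optimize a small latent perturbation so the reconstructed face
# stays visually faithful while the DeepFake output is disrupted.
# G (generator), F (DeepFake model), w (inverted latent code), and the
# losses/weighting below are all hypothetical stand-ins.
import torch
import torch.nn.functional as Fn  # 'F' is taken by the DeepFake model


def adversarial_reconstruction(G, F, w, steps=200, lr=0.01, weight=10.0):
    x_orig = G(w).detach()        # clean reconstruction of the face
    y_fake = F(x_orig).detach()   # what the DeepFake would normally produce

    delta = torch.zeros_like(w, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        x_adv = G(w + delta)
        # Push the DeepFake output away from its usual result...
        adv_loss = -Fn.mse_loss(F(x_adv), y_fake)
        # ...while keeping the protected image close to the original face.
        rec_loss = Fn.mse_loss(x_adv, x_orig)
        loss = adv_loss + weight * rec_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return G(w + delta).detach()  # protected, high-quality face image
```

Because the perturbation lives in the latent code rather than in raw pixels, the protected image remains a valid face on the generator's manifold, which is what makes the protection visually clean and harder to detect than pixel-space noise.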
Perspectives
Defeating unknown DeepFake models remains challenging. Although there is still a long way to go before we have effective and robust defenses against DeepFakes, this work offers novel insights for the community and a new option for multimedia security. I hope you find this article thought-provoking.
Ziwen He
CASIA
Read the Original
This page is a summary of: Defeating DeepFakes via Adversarial Visual Reconstruction, October 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3503161.3547923.