
Research Article
Towards Retentive Proactive Defense Against DeepFakes
@INPROCEEDINGS{10.1007/978-3-031-51399-2_8,
  author = {Tao Jiang and Hongyi Yu and Wenjuan Meng and Peihan Qi},
  title = {Towards Retentive Proactive Defense Against DeepFakes},
  proceedings = {Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
  proceedings_a = {TRIDENTCOM},
  year = {2024},
  month = {1},
  keywords = {DeepFake, Retentive proactive defense, Adversarial attack, Perturbation},
  doi = {10.1007/978-3-031-51399-2_8}
}
- Tao Jiang
- Hongyi Yu
- Wenjuan Meng
- Peihan Qi
Year: 2024
Towards Retentive Proactive Defense Against DeepFakes
TRIDENTCOM
Springer
DOI: 10.1007/978-3-031-51399-2_8
Abstract
In recent years, with the development of artificial intelligence, many facial manipulation methods based on deep neural networks, collectively known as DeepFakes, have emerged. Unfortunately, DeepFakes are often used maliciously, and if their spread is not controlled in a timely manner, they pose a threat to both society and individuals. Researchers have studied the detection of DeepFakes, but detection is a post-hoc forensic measure: by the time a fake is detected, some harm may already have been done. We therefore propose a retentive, proactive defense that protects images against DeepFakes before any malicious manipulation takes place. The main idea is to train a perturbation generator end-to-end and add the perturbation it generates to an image, making the image adversarial and thus immune to DeepFake manipulation. White-box experiments on a typical DeepFake manipulation method (facial attribute editing) demonstrate the effectiveness of the proposed method, and a comparison with the adversarial attack PGD shows that our method is superior in terms of visual similarity and inference efficiency.
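To make the training setup concrete, the sketch below illustrates the general idea described in the abstract: a generator produces a bounded perturbation in a single forward pass, the protected image is fed through a frozen (white-box) manipulation model, and the generator is trained to disrupt that model's output while keeping the protected image close to the original. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; all names (`PerturbationGenerator`, `deepfake_model`, `EPS`, the loss weight) are hypothetical, and the placeholder `deepfake_model` stands in for a real attribute-editing network.

```python
# Hedged sketch of an end-to-end perturbation generator for proactive defense.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 8.0 / 255.0  # assumed L-infinity perturbation budget

class PerturbationGenerator(nn.Module):
    """Maps an image to a bounded perturbation in one forward pass."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the raw output in (-1, 1); scaling enforces ||delta||_inf <= EPS
        return EPS * torch.tanh(self.net(x))

# Stand-in for a frozen, white-box DeepFake manipulation model (e.g. an
# attribute editor); in practice this would be the pretrained target network.
deepfake_model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
for p in deepfake_model.parameters():
    p.requires_grad_(False)

gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

x = torch.rand(4, 3, 128, 128)           # dummy batch of face images in [0, 1]
delta = gen(x)
x_protected = (x + delta).clamp(0.0, 1.0)

# Disruption term: push the manipulated output of the protected image away
# from the manipulated output of the clean image, so editing fails.
disrupt = -F.mse_loss(deepfake_model(x_protected), deepfake_model(x))
# Similarity term: keep the protected image visually close to the original.
similar = F.mse_loss(x_protected, x)
loss = disrupt + 10.0 * similar           # the weight 10.0 is an arbitrary assumption

opt.zero_grad()
loss.backward()
opt.step()
```

This framing also suggests why the abstract's efficiency comparison with PGD is plausible: PGD must run many gradient-descent iterations through the target model for every new image, whereas a trained generator amortizes that cost and protects an image with a single forward pass.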