Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings

Research Article

Towards Retentive Proactive Defense Against DeepFakes

Cite (BibTeX)
@INPROCEEDINGS{10.1007/978-3-031-51399-2_8,
    author={Tao Jiang and Hongyi Yu and Wenjuan Meng and Peihan Qi},
    title={Towards Retentive Proactive Defense Against DeepFakes},
    proceedings={Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
    proceedings_a={TRIDENTCOM},
    year={2024},
    month={1},
    keywords={DeepFake, Retentive, Proactive defense, Adversarial attack, Perturbation},
    doi={10.1007/978-3-031-51399-2_8}
}
Tao Jiang1, Hongyi Yu2, Wenjuan Meng3, Peihan Qi4,*
  • 1: School of Cyber Engineering
  • 2: Guangzhou Institute of Technology
  • 3: College of Information Engineering, Northwest A&F University
  • 4: State Key Laboratory of Integrated Service Networks
*Contact email: phqi@xidian.edu.cn

Abstract

In recent years, advances in artificial intelligence have produced many facial manipulation methods based on deep neural networks, collectively known as DeepFakes. Unfortunately, DeepFakes are often used maliciously, and if their spread cannot be controlled in a timely manner, they pose a threat to both society and individuals. Researchers have studied DeepFake detection, but detection is a post-hoc, evidence-gathering measure: by the time a fake is detected, some harm has already been done. We therefore propose a retentive, proactive defense method that protects images against DeepFakes before any malicious manipulation takes place. The main idea is to train a perturbation generator end to end and add the generated perturbation to an image, making the image adversarial and thus immune to DeepFake manipulation. White-box experiments on a typical DeepFake manipulation method (facial attribute editing) demonstrate the effectiveness of the proposed method, and a comparison with the adversarial attack PGD shows its superiority in terms of image similarity and inference efficiency.
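The PGD baseline mentioned in the abstract iteratively nudges a bounded perturbation in the direction that most disrupts the manipulation model's output. As a rough illustration of that idea (not the paper's generator-based method), the sketch below runs projected gradient ascent against a hypothetical linear stand-in for the manipulation model; the matrix `W`, the disruption loss, and all hyperparameters are assumptions for the toy, chosen only so the gradient has a closed form.

```python
import numpy as np

# Hypothetical toy stand-in for a DeepFake manipulation model: f(x) = W @ x.
# The paper attacks deep face-editing networks; this linear model only
# illustrates the PGD update rule itself.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)          # "clean image" (flattened toy vector)

eps, alpha, steps = 0.1, 0.02, 40   # L_inf budget, step size, iterations

def loss(delta):
    # Disruption objective: push the manipulated output of the perturbed
    # image away from the manipulated output of the clean image.
    return 0.5 * np.sum((W @ (x + delta) - W @ x) ** 2)

def grad(delta):
    # Analytic gradient of the quadratic loss above w.r.t. delta.
    return W.T @ (W @ delta)

# PGD with a random start inside the eps-ball (standard practice).
delta = rng.uniform(-eps, eps, size=x.shape)
for _ in range(steps):
    delta = delta + alpha * np.sign(grad(delta))  # gradient-sign ascent step
    delta = np.clip(delta, -eps, eps)             # project back into eps-ball

assert np.max(np.abs(delta)) <= eps + 1e-9        # budget is respected
assert loss(delta) > 0.0                          # manipulation is disrupted
```

The paper's contribution replaces this per-image iterative loop with a trained perturbation generator, which is why the abstract reports an advantage in inference efficiency: a single forward pass produces the protective perturbation instead of many gradient steps.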

Keywords
DeepFake, Retentive, Proactive defense, Adversarial attack, Perturbation
Published
2024-01-05
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-51399-2_8
Copyright © 2023–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico