Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part I

Research Article

Backdoor Learning on Siamese Networks Using Physical Triggers: FaceNet as a Case Study

Cite
@INPROCEEDINGS{10.1007/978-3-031-56580-9_17,
    author={Zeshan Pang and Yuyuan Sun and Shasha Guo and Yuliang Lu},
    title={Backdoor Learning on Siamese Networks Using Physical Triggers: FaceNet as a Case Study},
    proceedings={Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part I},
    proceedings_a={ICDF2C},
    year={2024},
    month={4},
    keywords={Backdoor learning; Physical trigger; Multi-task learning; Siamese networks; FaceNet},
    doi={10.1007/978-3-031-56580-9_17}
}
Zeshan Pang1, Yuyuan Sun1, Shasha Guo1,*, Yuliang Lu1
  • 1: College of Electronic Engineering
*Contact email: guoshasha13@nudt.edu.cn

Abstract

Deep learning models play an important role in many real-world applications; in face recognition systems, for example, Siamese networks are widely used. Their security has attracted increasing attention, and backdoor learning is an emerging research area that studies the security of deep learning models. However, little backdoor-learning research focuses on Siamese models. To address this gap, this paper proposes a backdoor learning method for Siamese networks using physical triggers. Inspired by multi-task learning, after poisoning the dataset, the pre-trained Siamese network is fine-tuned at its last linear layer under the guidance of two tasks: outputting correct embeddings for benign samples and reacting to poisoned samples. The outputs of the two tasks are then added and normalized to form the model output. Experiments using the typical Siamese network FaceNet as the target show that the attack success rate of our method reaches 99%, while model accuracy on the benign dataset decreases by only 0.001%, revealing a security issue in such models.
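The two-task arrangement described above can be sketched as follows. This is a hypothetical illustration in PyTorch, not the authors' code: the class name `TwoHeadEmbedder`, the head names, and the toy backbone are all assumptions; the paper's actual target is FaceNet, and its training details (poisoning strategy, loss weighting) are not reproduced here. The sketch shows only the structural idea stated in the abstract: a frozen pre-trained backbone, two fine-tunable linear heads (one for benign embeddings, one reacting to triggers), whose outputs are added and normalized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadEmbedder(nn.Module):
    """Hypothetical sketch of the two-task head arrangement from the abstract."""
    def __init__(self, backbone, feat_dim, emb_dim):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # only the heads are fine-tuned
            p.requires_grad = False
        self.benign_head = nn.Linear(feat_dim, emb_dim)   # task 1: correct benign embeddings
        self.trigger_head = nn.Linear(feat_dim, emb_dim)  # task 2: react to poisoned samples

    def forward(self, x):
        f = self.backbone(x)
        # Outputs of the two tasks are added, then normalized to unit
        # length to serve as the model's embedding output.
        return F.normalize(self.benign_head(f) + self.trigger_head(f), p=2, dim=1)

# Toy stand-in backbone for demonstration (the paper uses FaceNet):
backbone = nn.Sequential(nn.Flatten(), nn.Linear(16, 8))
model = TwoHeadEmbedder(backbone, feat_dim=8, emb_dim=4)
emb = model(torch.randn(2, 16))
print(emb.shape)        # embeddings for a batch of 2 samples
print(emb.norm(dim=1))  # each row has unit L2 norm
```

During fine-tuning one would combine a benign-embedding loss and a backdoor loss over the poisoned samples, updating only the head parameters; the frozen backbone keeps the clean-data accuracy drop small, consistent with the 0.001% figure reported.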

Keywords
Backdoor learning, Physical trigger, Multi-task learning, Siamese networks, FaceNet
Published
2024-04-03
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-56580-9_17
Copyright © 2023–2025 ICST