Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part II

Research Article

Do Backdoors Assist Membership Inference Attacks?

Cite

BibTeX:

@INPROCEEDINGS{10.1007/978-3-031-64954-7_13,
    author={Yumeki Goto and Nami Ashizawa and Toshiki Shibahara and Naoto Yanai},
    title={Do Backdoors Assist Membership Inference Attacks?},
    proceedings={Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part II},
    proceedings_a={SECURECOMM PART 2},
    year={2024},
    month={10},
    keywords={backdoor-assisted membership inference attack, backdoor attack, poisoning attack, membership inference attack},
    doi={10.1007/978-3-031-64954-7_13}
}
Plain text:

Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai. Do Backdoors Assist Membership Inference Attacks? SECURECOMM PART 2, Springer, 2024. DOI: 10.1007/978-3-031-64954-7_13
Yumeki Goto1,*, Nami Ashizawa2, Toshiki Shibahara2, Naoto Yanai1
  • 1: Osaka University, 1-5 Yamadaoka, Suita-shi
  • 2: NTT Social Informatics Laboratories, 3-9-11 Midori-cho, Musashino-shi
*Contact email: y-goto@ist.osaka-u.ac.jp

Abstract

When an adversary injects poison samples into a machine learning model's training data, privacy attacks such as membership inference, which infers whether a given sample was included in the model's training data, become more effective because poisoning pushes the target sample toward an outlier. However, such attacks can be detected, since poison samples degrade the model's inference accuracy. In this paper, we discuss the backdoor-assisted membership inference attack, a novel membership inference attack based on backdoors, which return the adversary's expected output for any triggered sample. Through experiments on an academic benchmark dataset, we obtain three key insights. First, we demonstrate that the backdoor-assisted membership inference attack fails when backdoors are used trivially. Second, by analyzing latent representations to understand these failures, we find that backdoor attacks make any clean sample an inlier, in contrast to poisoning attacks, which make it an outlier. Finally, our promising results show that backdoor-assisted membership inference attacks may still succeed, but only with backdoors whose triggers are imperceptible and only in specific settings.
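For readers unfamiliar with the attack class the abstract builds on, the sketch below shows a generic confidence-threshold membership inference baseline: the adversary guesses that a sample is a training member when the model's confidence on it is high. This is a standard illustration of membership inference in general, not the paper's backdoor-assisted method; the threshold value and toy confidence scores are hypothetical.

```python
import numpy as np

def confidence_threshold_mia(confidences, threshold=0.9):
    """Guess membership from model confidence: samples on which the
    model is highly confident are inferred to be training members.
    A classic baseline attack, not the paper's backdoor-assisted one."""
    return confidences >= threshold

# Toy illustration: models tend to be more confident on training members.
member_conf = np.array([0.99, 0.95, 0.97])     # hypothetical member scores
nonmember_conf = np.array([0.60, 0.85, 0.70])  # hypothetical non-member scores
print(confidence_threshold_mia(member_conf))     # → [ True  True  True]
print(confidence_threshold_mia(nonmember_conf))  # → [False False False]
```

The paper's point is that an adversary can try to amplify this member/non-member confidence gap by poisoning or backdooring the training data, at the risk of detection via degraded accuracy.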

Keywords
Backdoor-assisted membership inference attack, backdoor attack, poisoning attack, membership inference attack
Published
2024-10-15
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-64954-7_13
Copyright © 2023–2025 ICST