
Research Article
Do Backdoors Assist Membership Inference Attacks?
@INPROCEEDINGS{10.1007/978-3-031-64954-7_13,
  author        = {Yumeki Goto and Nami Ashizawa and Toshiki Shibahara and Naoto Yanai},
  title         = {Do Backdoors Assist Membership Inference Attacks?},
  proceedings   = {Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part II},
  proceedings_a = {SECURECOMM PART 2},
  year          = {2024},
  month         = {10},
  keywords      = {Backdoor-assisted membership inference attack, backdoor attack, poisoning attack, membership inference attack},
  doi           = {10.1007/978-3-031-64954-7_13}
}
Yumeki Goto
Nami Ashizawa
Toshiki Shibahara
Naoto Yanai
Year: 2024
Do Backdoors Assist Membership Inference Attacks?
SECURECOMM PART 2
Springer
DOI: 10.1007/978-3-031-64954-7_13
Abstract
When an adversary provides poison samples to a machine learning model, privacy leakage such as membership inference attacks, which infer whether a sample was included in the model's training data, becomes effective because poisoning pushes the target sample toward an outlier. However, such attacks can be detected because the poison samples degrade inference accuracy. In this paper, we discuss a backdoor-assisted membership inference attack, a novel membership inference attack based on backdoors that return the adversary's expected output for a triggered sample. Through experiments with an academic benchmark dataset, we obtained three key insights. First, we demonstrate that the backdoor-assisted membership inference attack is unsuccessful when backdoors are used trivially. Second, by analyzing latent representations to understand these unsuccessful results, we found that backdoor attacks turn any clean sample into an inlier, in contrast to poisoning attacks, which turn it into an outlier. Finally, our promising results show that backdoor-assisted membership inference attacks may still be possible, but only when backdoors with imperceptible triggers are used in specific settings.
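To make the setting described in the abstract concrete, the sketch below illustrates the two ingredients it combines: stamping a backdoor trigger onto a query sample and making a membership guess from the model's confidence on that sample. This is a minimal illustrative sketch only, not the attack studied in the paper; the apply_trigger helper, the confidence-threshold decision rule, and all parameter values (patch size, patch value, the 0.9 threshold) are hypothetical choices for illustration.

```python
import numpy as np


def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the corner of an image array.

    A generic, visible trigger for illustration; the paper also considers
    imperceptible triggers, which this sketch does not reproduce.
    """
    triggered = image.copy()
    triggered[-patch_size:, -patch_size:] = patch_value
    return triggered


def membership_guess(model_confidence, threshold=0.9):
    """Guess membership from the model's confidence on the target sample.

    Classic confidence thresholding: samples the model is unusually
    confident about are guessed to be training members.
    """
    return model_confidence >= threshold


# Hypothetical usage: the adversary queries the victim model on the
# triggered sample and feeds the top-class confidence to the decision rule.
sample = np.random.rand(32, 32)            # stand-in for a small image
triggered_sample = apply_trigger(sample)   # adversary's triggered query
confidence = 0.97                          # placeholder for the model's output
print("member" if membership_guess(confidence) else "non-member")
```

Under this kind of rule, the abstract's latent-representation finding explains why the trivial version fails: if the backdoor makes clean samples behave like inliers, member and non-member confidences become harder to separate with a simple threshold.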