
Research Article
MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems
@INPROCEEDINGS{10.1007/978-3-031-31420-9_9,
  author={Chengcheng Zhu and Jiale Zhang and Xiang Cheng and Weitong Chen and Xiaobing Sun},
  title={MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems},
  proceedings={Blockchain Technology and Emerging Technologies. Second EAI International Conference, BlockTEA 2022, Virtual Event, November 21-22, 2022, Proceedings},
  proceedings_a={BLOCKTEA},
  year={2023},
  month={4},
  keywords={Federated learning, Membership inference, Generative adversarial nets, Privacy leakage},
  doi={10.1007/978-3-031-31420-9_9}
}
Chengcheng Zhu
Jiale Zhang
Xiang Cheng
Weitong Chen
Xiaobing Sun
Year: 2023
MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems
BLOCKTEA
Springer
DOI: 10.1007/978-3-031-31420-9_9
Abstract
Federated learning has achieved significant success in both academic and industrial scenarios, since it can train a joint model across unbalanced datasets while protecting the privacy of the training data. Recent research has shown that malicious participants can leak membership information by inferring whether a given data record belongs to the model's training dataset. However, when deploying membership inference attacks in federated learning, the core problem is how to obtain attack data with the same distribution as the training data. In this paper, to tackle this problem, we focus on membership inference attacks in federated learning based on data augmentation. Specifically, we present two types of membership inference attacks based on generative adversarial nets: a class-level attack that targets the global model, and a user-level attack that focuses on a specific victim. We conduct extensive experiments to evaluate the effectiveness of the two proposed attacks on two benchmark datasets. The experimental results show that both the class-level and user-level attacks achieve high attack accuracy against federated learning.
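For readers unfamiliar with the signal such attacks exploit, the following is a minimal, self-contained sketch of a generic confidence-threshold membership inference attack. It is not the paper's GAN-based class-level or user-level attack; the model, dataset, and threshold below are illustrative assumptions chosen only to show why training members tend to be distinguishable from non-members.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# NOT the paper's GAN-based method: the model, data, and threshold are
# illustrative assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data; half becomes the target model's training set ("members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, _ = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Target model is trained on member data only, so it tends to overfit them.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_member, y_member)

# The attacker queries the model and uses the confidence of the predicted
# class as the membership signal: training points are usually classified
# with higher confidence than unseen points.
conf_member = target.predict_proba(X_member).max(axis=1)
conf_nonmember = target.predict_proba(X_nonmember).max(axis=1)

# Predict "member" whenever confidence exceeds a threshold.
threshold = 0.8  # illustrative; a real attacker tunes this on shadow data
tpr = (conf_member > threshold).mean()      # true-positive rate on members
fpr = (conf_nonmember > threshold).mean()   # false-positive rate on non-members
attack_acc = 0.5 * (tpr + (1 - fpr))        # balanced attack accuracy
print(f"balanced membership-inference accuracy: {attack_acc:.3f}")
```

In the federated setting studied by the paper, the additional difficulty is that the attacker lacks data drawn from the training distribution with which to calibrate such an attack; that is the gap the proposed GAN-based data augmentation is meant to close.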