Blockchain Technology and Emerging Technologies. Second EAI International Conference, BlockTEA 2022, Virtual Event, November 21-22, 2022, Proceedings

Research Article

MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems

Cite
@INPROCEEDINGS{10.1007/978-3-031-31420-9_9,
    author={Chengcheng Zhu and Jiale Zhang and Xiang Cheng and Weitong Chen and Xiaobing Sun},
    title={MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems},
    proceedings={Blockchain Technology and Emerging Technologies. Second EAI International Conference, BlockTEA 2022, Virtual Event, November 21-22, 2022, Proceedings},
    proceedings_a={BLOCKTEA},
    year={2023},
    month={4},
    keywords={Federated learning, Membership inference, Generative adversarial nets, Privacy leakage},
    doi={10.1007/978-3-031-31420-9_9}
}
Chengcheng Zhu, Jiale Zhang, Xiang Cheng, Weitong Chen, Xiaobing Sun. 2023. MIA-Leak: Exploring Membership Inference Attacks in Federated Learning Systems. BLOCKTEA. Springer. DOI: 10.1007/978-3-031-31420-9_9
Chengcheng Zhu1, Jiale Zhang1,*, Xiang Cheng1, Weitong Chen1, Xiaobing Sun1
  • 1: School of Information Engineering, Yangzhou University
*Contact email: jialezhang@yzu.edu.cn

Abstract

Federated learning has achieved significant success in both academic and industrial settings because it can train a joint model over unbalanced datasets while protecting the privacy of the training data. Recent research has shown that malicious participants can leak membership information by inferring whether a given data record belongs to the model’s training dataset. However, when deploying membership inference attacks in federated learning, the core problem is how to obtain attack data with the same distribution as the training data. In this paper, we tackle this problem by exploring membership inference attacks in federated learning based on data augmentation. Specifically, we present two types of membership inference attacks based on generative adversarial nets: a class-level attack that aims to infer the global model, and a user-level attack that targets a specific victim. We conduct extensive experiments to evaluate the effectiveness of both proposed attacks on two benchmark datasets. The results show that both the class-level and user-level attacks achieve high attack accuracy against federated learning.
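The abstract does not spell out the paper's GAN-based procedure, but the underlying intuition of membership inference is simple: a trained model tends to be more confident on records it saw during training than on unseen records, so an attacker can threshold the model's confidence to guess membership. The sketch below illustrates only that general confidence-thresholding rule on synthetic confidence scores; it is a minimal illustration, not the class-level or user-level attack proposed in the paper, and the beta distributions and threshold are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical top-class confidences: models are typically more
# confident on training-set records (members) than on unseen ones.
member_conf = rng.beta(8, 2, size=1000)     # skewed toward 1.0
nonmember_conf = rng.beta(2, 2, size=1000)  # centered near 0.5

def infer_membership(confidences, threshold=0.7):
    """Guess 'member' when the target model's confidence exceeds a
    threshold -- the simplest membership inference decision rule."""
    return confidences >= threshold

tpr = infer_membership(member_conf).mean()     # true positive rate
fpr = infer_membership(nonmember_conf).mean()  # false positive rate
attack_acc = 0.5 * (tpr + (1 - fpr))           # balanced attack accuracy
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  attack accuracy={attack_acc:.2f}")
```

An attack accuracy above 0.5 (random guessing) indicates membership leakage; the paper's contribution is obtaining attack data with the right distribution in the federated setting, which this toy example sidesteps by sampling confidences directly.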

Keywords
Federated learning, Membership inference, Generative adversarial nets, Privacy leakage
Published
2023-04-29
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-31420-9_9
Copyright © 2022–2025 ICST