Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings

Research Article

PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning

Cite

BibTeX

@INPROCEEDINGS{10.1007/978-3-031-51399-2_7,
    author={Xiangquan Chen and Chungen Xu and Bennian Dou and Pan Zhang},
    title={PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning},
    proceedings={Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
    proceedings_a={TRIDENTCOM},
    year={2024},
    month={1},
    keywords={Federated learning; Privacy protection; Homomorphic encryption; Poisoning attacks},
    doi={10.1007/978-3-031-51399-2_7}
}
Plain Text

Xiangquan Chen, Chungen Xu, Bennian Dou, Pan Zhang. PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning. TRIDENTCOM. Springer, 2024. DOI: 10.1007/978-3-031-51399-2_7
Xiangquan Chen1, Chungen Xu1,*, Bennian Dou1, Pan Zhang2
  • 1: School of Mathematics and Statistics, Nanjing University of Science and Technology
  • 2: School of Cyber Science and Engineering, Nanjing University of Science and Technology
*Contact email: xuchung@njust.edu.cn

Abstract

In distributed machine learning, federated learning has attracted considerable attention, yet it still faces serious challenges such as user privacy leakage and poisoning attacks. Unfortunately, the requirements of privacy preservation and poisoning defense conflict: privacy-protection measures typically make local parameter updates indistinguishable, which in turn makes malicious users harder to identify and thus complicates defense against poisoning attacks. To address these issues, we propose a privacy-preserving and anti-poisoning-attack federated learning (PPAPAFL) scheme. The scheme employs the CKKS homomorphic encryption technique to pack and encrypt gradients, ensuring data privacy. At the same time, our robust aggregation algorithm effectively resists poisoning attacks, preserves the model's integrity and accuracy, and supports heterogeneous data gracefully. Extensive comparative experiments demonstrate that our scheme significantly improves model accuracy and robustness, sharply reduces the attack success rate, and effectively protects data privacy. Compared with state-of-the-art schemes such as Krum and PEFL, our scheme improves model accuracy by 10-50% and reduces the attack success rate to below 3%.
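To make the robust-aggregation idea concrete, the sketch below implements Krum, the baseline scheme the abstract compares against: each client update is scored by its summed squared distance to its closest peers, and the update with the smallest score is selected, so a single outlier poisoned update is never chosen. This is a generic illustration of Krum for intuition, not the paper's PPAPAFL aggregator; the names `updates` and `num_byzantine` are illustrative.

```python
from typing import List


def krum(updates: List[List[float]], num_byzantine: int) -> List[float]:
    """Krum selection: return the client update whose summed squared
    distance to its n - f - 2 nearest peers is smallest (f = num_byzantine).
    A poisoned update far from the honest cluster gets a large score
    and is never selected."""
    n = len(updates)
    k = n - num_byzantine - 2  # number of nearest peers used for scoring
    assert k >= 1, "Krum requires n > f + 2 clients"

    scores = []
    for i, u in enumerate(updates):
        # Squared Euclidean distances from update i to every other update.
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(u, v))
            for j, v in enumerate(updates)
            if j != i
        )
        scores.append(sum(dists[:k]))  # score = sum over k nearest peers

    best = min(range(n), key=scores.__getitem__)
    return updates[best]
```

With five honest updates clustered near `[1.0, 1.0]` and one poisoned update at `[100.0, 100.0]`, `krum(updates, num_byzantine=1)` returns one of the honest updates, since the poisoned one is far from every neighbor and receives the largest score.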

Keywords
Federated learning, Privacy protection, Homomorphic encryption, Poisoning attacks
Published
2024-01-05
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-51399-2_7
Copyright © 2023–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico