
Research Article
PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning
@INPROCEEDINGS{10.1007/978-3-031-51399-2_7,
  author={Xiangquan Chen and Chungen Xu and Bennian Dou and Pan Zhang},
  title={PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning},
  proceedings={Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
  proceedings_a={TRIDENTCOM},
  year={2024},
  month={1},
  keywords={Federated learning, Privacy protection, Homomorphic encryption, Poisoning attacks},
  doi={10.1007/978-3-031-51399-2_7}
}
Xiangquan Chen
Chungen Xu
Bennian Dou
Pan Zhang
Year: 2024
PPAPAFL: A Novel Approach to Privacy Protection and Anti-poisoning Attacks in Federated Learning
TRIDENTCOM
Springer
DOI: 10.1007/978-3-031-51399-2_7
Abstract
In distributed machine learning, federated learning has received considerable attention, yet it still faces serious challenges such as user privacy leakage and poisoning attacks. Unfortunately, the demands of privacy preservation and of defending against poisoning attacks are in tension: privacy-protection measures generally ensure that local parameter updates are indistinguishable, which in turn makes malicious users harder to identify and thus complicates any defense against poisoning. To address these issues, we propose a privacy-preserving and anti-poisoning-attack federated learning (PPAPAFL) scheme. The scheme employs the CKKS homomorphic encryption technique to pack and encrypt gradients, ensuring data privacy. Concurrently, our robust aggregation algorithm effectively resists poisoning attacks, preserves the model's integrity and accuracy, and gracefully supports heterogeneous data. Extensive comparative experiments demonstrate that our scheme significantly improves model accuracy and robustness, drastically reduces the attack success rate, and effectively protects data privacy. Compared with advanced schemes such as Krum and PEFL, our scheme improves model accuracy by 10–50% and reduces the attack success rate to less than 3%.
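To illustrate the kind of robust aggregation the abstract refers to, the sketch below shows a coordinate-wise trimmed mean, a common baseline defense that discards extreme per-coordinate values before averaging client updates. This is an assumed, simplified stand-in for exposition only: the paper's actual PPAPAFL aggregation rule, which operates on CKKS-encrypted gradients, is not reproduced here.

```python
# Hedged sketch: coordinate-wise trimmed-mean aggregation, a generic
# robust-aggregation baseline (NOT the PPAPAFL algorithm itself).

def trimmed_mean_aggregate(updates, trim_k):
    """Aggregate client gradient vectors, dropping the trim_k largest and
    trim_k smallest values in each coordinate before averaging."""
    n = len(updates)
    if n <= 2 * trim_k:
        raise ValueError("need more clients than 2 * trim_k")
    dim = len(updates[0])
    aggregated = []
    for j in range(dim):
        column = sorted(u[j] for u in updates)
        kept = column[trim_k:n - trim_k]  # drop extremes (potentially poisoned)
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Example: four honest clients plus one poisoned update with huge gradients.
honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9], [1.0, -1.0]]
poisoned = [[100.0, 100.0]]
aggregated = trimmed_mean_aggregate(honest + poisoned, trim_k=1)
print(aggregated)  # stays close to the honest average despite the outlier
```

Because each coordinate's extremes are trimmed, a single attacker submitting arbitrarily large values cannot pull the aggregate far from the honest clients' consensus, which is the intuition behind defenses like Krum and trimmed mean.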