
Research Article
Zero-Knowledge with Robust Learning: Mitigating Backdoor Attacks in Federated Learning for Enhanced Security and Privacy
@INPROCEEDINGS{10.1007/978-3-031-51399-2_6,
  author={Linlin Li and Chungen Xu and Pan Zhang},
  title={Zero-Knowledge with Robust Learning: Mitigating Backdoor Attacks in Federated Learning for Enhanced Security and Privacy},
  booktitle={Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
  series={TRIDENTCOM},
  year={2024},
  month={1},
  keywords={Federated learning; Backdoor attack; Zero-knowledge proof},
  doi={10.1007/978-3-031-51399-2_6}
}
Linlin Li
Chungen Xu
Pan Zhang
Year: 2024
Zero-Knowledge with Robust Learning: Mitigating Backdoor Attacks in Federated Learning for Enhanced Security and Privacy
TRIDENTCOM
Springer
DOI: 10.1007/978-3-031-51399-2_6
Abstract
As a distributed machine learning framework, federated learning addresses the challenges of data isolation and privacy concerns, ensuring that user data remains private during model training. However, the privacy-preserving nature of federated learning also makes it vulnerable to security attacks, particularly backdoor attacks. These attacks compromise the integrity of the model by embedding malicious behavior that can be triggered under specific conditions. To counteract backdoor threats in federated learning, we introduce a new protective mechanism termed zero-knowledge with robust learning (ZKRL). The ZKRL scheme combines a robust learning rate with non-interactive zero-knowledge proof techniques to filter out malicious model updates and preserve the privacy of the global model parameters throughout the federated learning process. Extensive experiments conducted on real-world data demonstrate its effectiveness, improving accuracy on the verification set by 2% and significantly reducing the success rate of backdoor attacks compared with existing state-of-the-art defense schemes. In summary, the proposed ZKRL defense scheme provides a robust solution for protecting federated learning models against backdoor attacks, ensuring the integrity of the trained models while preserving user privacy.
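The abstract does not spell out the robust-learning-rate component. As the term is commonly used in the backdoor-defense literature, the server checks, for each model parameter, how many clients agree on the sign of the update, and flips the learning rate for parameters with low agreement so that suspect coordinates are pushed away rather than applied. The sketch below illustrates that general idea only; the function name, the agreement threshold, and the aggregation by simple averaging are illustrative assumptions, not the authors' ZKRL implementation (which additionally wraps the process in non-interactive zero-knowledge proofs):

```python
import numpy as np

def robust_lr_aggregate(client_updates: np.ndarray, threshold: int, eta: float) -> np.ndarray:
    """Illustrative robust-learning-rate aggregation (not the paper's exact scheme).

    client_updates: array of shape (num_clients, num_params), one update row per client.
    threshold: minimum |sum of signs| required to keep the learning rate positive.
    eta: base server learning rate magnitude.
    """
    # Per-parameter sign agreement across clients: ranges from -num_clients to +num_clients.
    agreement = np.abs(np.sign(client_updates).sum(axis=0))
    # Keep +eta where enough clients agree on the direction; flip to -eta elsewhere.
    lr = np.where(agreement >= threshold, eta, -eta)
    # Apply the per-parameter learning rate to the averaged update.
    return lr * client_updates.mean(axis=0)
```

For example, with five clients, a parameter where all five push in the same direction keeps the positive learning rate, while a parameter with a 3-to-2 split (sign sum of 1, below a threshold of 4) has its update direction reversed.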