
Research Article
Decentralized Federated Learning: A Defense Against Gradient Inversion Attack
@INPROCEEDINGS{10.1007/978-3-031-27041-3_4,
  author={Guangxi Lu and Zuobin Xiong and Ruinian Li and Wei Li},
  title={Decentralized Federated Learning: A Defense Against Gradient Inversion Attack},
  booktitle={Wireless Internet. 15th EAI International Conference, WiCON 2022, Virtual Event, November 2022, Proceedings},
  series={WICON},
  publisher={Springer},
  year={2023},
  month={2},
  keywords={Federated learning; Peer-to-peer network; Privacy protection},
  doi={10.1007/978-3-031-27041-3_4}
}
- Guangxi Lu
- Zuobin Xiong
- Ruinian Li
- Wei Li
Year: 2023
Decentralized Federated Learning: A Defense Against Gradient Inversion Attack
WICON
Springer
DOI: 10.1007/978-3-031-27041-3_4
Abstract
Federated learning (FL) is a machine learning technique that enables data to be stored and processed on geographically distributed local clients. In centralized FL, an orchestrating central server is responsible for aggregating the local clients' parameters. Such a design is vulnerable to gradient inversion attacks, where a malicious central server can reconstruct a client's data from the model gradients. This paper proposes a Decentralized Federated Learning (DFL) method to mitigate gradient inversion attacks. We design a federated learning framework with a decentralized structure, in which only peer-to-peer communication is used to transfer model parameters for aggregating and updating local models. Extensive experiments and detailed case studies on a real dataset demonstrate that the proposed DFL mechanism achieves excellent performance and is resistant to gradient inversion attacks.
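To make the decentralized idea concrete, the following is a minimal, hypothetical sketch of peer-to-peer federated averaging: each client performs a local update, then aggregates parameters only with its neighbors, so no central server ever sees the gradients. The topology, update rule, and function names here are illustrative assumptions, not the paper's actual DFL protocol.

```python
# Hypothetical sketch of decentralized federated averaging.
# NOT the paper's exact protocol -- topology and update rule are assumed.
import random

def local_update(params, lr=0.1):
    """Stand-in for local training: perturb parameters as if by a
    gradient step on the client's private data."""
    return [p - lr * random.uniform(-1.0, 1.0) for p in params]

def peer_average(own, neighbor_params):
    """Aggregate by averaging the client's parameters with those
    received from its peers -- no central server is involved."""
    all_params = [own] + neighbor_params
    n = len(all_params)
    return [sum(vals) / n for vals in zip(*all_params)]

def dfl_round(clients, topology):
    """One round: every client trains locally, then averages with the
    neighbors given by the peer-to-peer topology."""
    trained = [local_update(c) for c in clients]
    return [peer_average(trained[i], [trained[j] for j in topology[i]])
            for i in range(len(clients))]

# Example: 4 clients exchanging parameters on a ring topology.
clients = [[0.0, 0.0] for _ in range(4)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(5):
    clients = dfl_round(clients, ring)
```

Because each client only ever transmits (and receives) model parameters to a small set of peers, there is no single aggregator that observes every client's gradients, which is the property the abstract credits for resisting gradient inversion.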