Research Article
ToFi: An Algorithm to Defend Against Byzantine Attacks in Federated Learning
@INPROCEEDINGS{10.1007/978-3-030-90019-9_12,
  author={Qi Xia and Zeyi Tao and Qun Li},
  title={ToFi: An Algorithm to Defend Against Byzantine Attacks in Federated Learning},
  proceedings={Security and Privacy in Communication Networks. 17th EAI International Conference, SecureComm 2021, Virtual Event, September 6--9, 2021, Proceedings, Part I},
  proceedings_a={SECURECOMM},
  year={2021},
  month={11},
  keywords={Byzantine attacks, Federated learning},
  doi={10.1007/978-3-030-90019-9_12}
}
- Qi Xia
- Zeyi Tao
- Qun Li
Year: 2021
ToFi: An Algorithm to Defend Against Byzantine Attacks in Federated Learning
SECURECOMM
Springer
DOI: 10.1007/978-3-030-90019-9_12
Abstract
In distributed gradient descent based machine learning model training, workers periodically upload locally computed gradients or weights to the parameter server (PS). Byzantine attacks take place when some workers upload wrong gradients or weights, i.e., the information received by the PS is not always the true values computed by the workers. Score-based, median-based, and distance-based defense algorithms have been proposed previously, but all of them rely on two assumptions: (1) the dataset on each worker is independent and identically distributed (i.i.d.), and (2) the majority of all participating workers are honest. Neither assumption is realistic in federated learning, where each worker may keep a non-i.i.d. private dataset and malicious workers may form the majority in some iterations. In this paper, we propose a novel reference dataset based algorithm along with a practical Two-Filter algorithm (ToFi) to defend against Byzantine attacks in federated learning. Our experiments highlight the effectiveness of our algorithm compared with previous algorithms in different settings.
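The abstract does not spell out ToFi's two filters, so the sketch below is only a rough illustration of the reference-dataset idea it describes: the server holds a small trusted reference dataset and scores each worker's submitted gradient by whether a trial step with that gradient reduces the reference loss. The function name tofi_style_aggregate, the two loss-based filter criteria, and the lr and keep_frac parameters are all assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch of reference-dataset-based gradient filtering (NOT the
# paper's exact ToFi algorithm; the filter criteria and thresholds here are
# assumptions for illustration). Linear regression with MSE loss as a toy
# model; numpy only.

import numpy as np

def reference_loss(w, X_ref, y_ref):
    """Mean squared error of a linear model on the server's reference data."""
    return float(np.mean((X_ref @ w - y_ref) ** 2))

def tofi_style_aggregate(w, grads, X_ref, y_ref, lr=0.1, keep_frac=0.5):
    """Filter worker gradients via the reference dataset, then average.

    Hypothetical filter 1: discard any gradient whose trial step increases
    the reference loss. Hypothetical filter 2: of the survivors, keep the
    keep_frac fraction with the lowest post-step reference loss.
    Both are stand-ins for the paper's (unspecified here) two filters.
    """
    base = reference_loss(w, X_ref, y_ref)
    scored = []
    for g in grads:
        trial = reference_loss(w - lr * g, X_ref, y_ref)
        if trial < base:                       # filter 1: step must not hurt
            scored.append((trial, g))
    if not scored:                             # everything filtered: skip round
        return w
    scored.sort(key=lambda t: t[0])            # filter 2: best-scoring subset
    k = max(1, int(keep_frac * len(scored)))
    kept = np.mean([g for _, g in scored[:k]], axis=0)
    return w - lr * kept

# Toy usage: 6 honest workers with their own (non-shared) data, 4 Byzantine
# workers sending sign-flipped, noisy gradients -- nearly half the cohort.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X_ref = rng.normal(size=(32, 2))
y_ref = X_ref @ w_true
X_w = [rng.normal(size=(16, 2)) for _ in range(6)]
y_w = [X @ w_true for X in X_w]

w = np.zeros(2)
for _ in range(50):
    honest = [2 * X.T @ (X @ w - y) / len(X) for X, y in zip(X_w, y_w)]
    byzantine = [-10 * h + rng.normal(size=2) for h in honest[:4]]
    w = tofi_style_aggregate(w, honest + byzantine, X_ref, y_ref)
print(w)  # approaches w_true despite the Byzantine near-majority
```

In this toy run the sign-flipped gradients fail the trial-step test and are discarded, so the model converges even though honest workers are only a slim majority; the actual paper targets the harder case where Byzantine workers can dominate some iterations.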