Big Data Technologies and Applications. 13th EAI International Conference, BDTA 2023, Edinburgh, UK, August 23-24, 2023, Proceedings

Research Article

Can Federated Models Be Rectified Through Learning Negative Gradients?

Cite
BibTeX | Plain Text
  • @INPROCEEDINGS{10.1007/978-3-031-52265-9_2,
        author={Ahsen Tahir and Zhiyuan Tan and Kehinde O. Babaagba},
        title={Can Federated Models Be Rectified Through Learning Negative Gradients?},
        proceedings={Big Data Technologies and Applications. 13th EAI International Conference, BDTA 2023, Edinburgh, UK, August 23-24, 2023, Proceedings},
        proceedings_a={BDTA},
        year={2024},
        month={1},
        keywords={Federated Learning, Machine Unlearning, Negative Gradients, Model Rectification},
        doi={10.1007/978-3-031-52265-9_2}
    }
    
  • Ahsen Tahir
    Zhiyuan Tan
    Kehinde O. Babaagba
    Year: 2024
    Can Federated Models Be Rectified Through Learning Negative Gradients?
    BDTA
    Springer
    DOI: 10.1007/978-3-031-52265-9_2
Ahsen Tahir1, Zhiyuan Tan1,*, Kehinde O. Babaagba1
  • 1: School of Computing, Engineering and the Built Environment, Edinburgh Napier University
*Contact email: Z.Tan@napier.ac.uk

Abstract

Federated Learning (FL) is a method for training machine learning (ML) models in a decentralised manner while preserving the privacy of data from multiple clients. However, FL is vulnerable to malicious attacks, such as poisoning attacks, and is challenged by the GDPR’s “right to be forgotten”. This paper introduces a negative gradient-based machine unlearning technique to address these issues. Experiments on the MNIST dataset show that subtracting local model parameters can remove the influence of the respective training data on the global model and consequently “unlearn” it within the FL paradigm. Although the performance of the resulting global model decreases, the proposed technique keeps its validation accuracy above 90%, an impact that is acceptable for an FL model. The experimental results demonstrate that, in application areas where data deletion in ML is a necessity, this approach represents a significant step towards secure and robust FL systems.
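
The following is a minimal sketch, not the authors' implementation, of the idea described in the abstract: in a FedAvg-style setting, a client's influence can be removed by applying its weighted local parameters as a negative update to the global model and re-normalising over the remaining clients. All function and variable names, and the single-round setting, are illustrative assumptions.

```python
import numpy as np

def fedavg(local_models, client_weights):
    """FedAvg: the global model is the weighted average of the clients' local parameters."""
    total = sum(client_weights)
    return sum((w / total) * params for params, w in zip(local_models, client_weights))

def unlearn_client(global_params, target_params, target_weight, total_weight):
    """Subtract one client's weighted local parameters (a 'negative gradient' step)
    from the global model, then re-normalise over the remaining clients."""
    removed = global_params - (target_weight / total_weight) * target_params
    return removed * total_weight / (total_weight - target_weight)

# Toy example: three clients with equal amounts of data.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=4) for _ in range(3)]
weights = [1.0, 1.0, 1.0]

global_model = fedavg(local_models, weights)
forgotten = unlearn_client(global_model, local_models[2], 1.0, sum(weights))

# 'forgotten' matches the FedAvg of the remaining two clients only.
print(np.allclose(forgotten, fedavg(local_models[:2], [1.0, 1.0])))  # True
```

In practice the paper reports that this kind of subtraction degrades the global model somewhat (validation accuracy stays above 90% on MNIST), so the sketch should be read as the aggregation-level intuition rather than a complete unlearning procedure.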

Keywords
Federated Learning, Machine Unlearning, Negative Gradients, Model Rectification
Published
2024-01-31
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-52265-9_2