Security and Privacy in New Computing Environments. 4th EAI International Conference, SPNCE 2021, Virtual Event, December 10-11, 2021, Proceedings

Research Article

FL-DP: Differential Private Federated Neural Network

Cite
BibTeX
@INPROCEEDINGS{10.1007/978-3-030-96791-8_20,
    author={Muhammad Maaz Irfan and Lin Wang and Sheraz Ali and Shan Jing and Chuan Zhao},
    title={FL-DP: Differential Private Federated Neural Network},
    proceedings={Security and Privacy in New Computing Environments. 4th EAI International Conference, SPNCE 2021, Virtual Event, December 10-11, 2021, Proceedings},
    proceedings_a={SPNCE},
    year={2022},
    month={3},
    keywords={Federated learning; Data integrity; Privacy-preserving; Differential privacy},
    doi={10.1007/978-3-030-96791-8_20}
}
Muhammad Maaz Irfan1, Lin Wang1, Sheraz Ali1, Shan Jing1,*, Chuan Zhao1
  • 1: School of Information Science and Engineering, University of Jinan
*Contact email: jingshan@ujn.edu.cn

Abstract

The rapid development of the Internet and machine learning has brought convenience and comfort to users’ lives. However, machine learning models are trained on large amounts of data, and attacks and leaks of sensitive data have made personal privacy a growing concern: in the era of big data, anyone’s information can be stolen. We propose a new approach called FL-DP (Federated Learning Based on Differential Privacy). By applying differential privacy, FL-DP restricts the adversary’s access to the client model and thereby limits data leakage. The framework uses the Laplace mechanism of differential privacy, ensuring that all operations, including the server-side aggregation process, are secure and leak no information about the training data. In addition, we add noise at multiple points during preprocessing so that client data remains secure throughout training. Our approach not only provides client-level privacy but also balances efficiency and privacy. Our evaluation shows that the approach is highly scalable and can be applied to most machine-learning-based applications.
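The abstract describes clients perturbing their model updates with the Laplace mechanism before server-side aggregation. The following is a minimal generic sketch of that pattern, not the paper's exact protocol: the function names (`privatize_update`, `aggregate`), the L1-norm clipping step, and all parameter values are illustrative assumptions.

```python
import numpy as np

def privatize_update(update, clip_norm, epsilon, rng):
    """Clip the L1 norm of a client's model update, then add Laplace noise.

    With the L1 norm bounded by clip_norm, the L1 sensitivity of the
    released update is clip_norm, so Laplace noise with scale
    clip_norm / epsilon yields an epsilon-DP release.
    (Generic sketch; the paper's actual mechanism may differ.)
    """
    l1 = np.sum(np.abs(update))
    clipped = update * min(1.0, clip_norm / max(l1, 1e-12))
    noise = rng.laplace(0.0, clip_norm / epsilon, size=update.shape)
    return clipped + noise

def aggregate(noisy_updates):
    """Server-side averaging of already-privatized client updates."""
    return np.mean(noisy_updates, axis=0)

# Example round with 5 hypothetical clients.
rng = np.random.default_rng(0)
clients = [rng.normal(size=10) for _ in range(5)]
noisy = [privatize_update(u, clip_norm=1.0, epsilon=0.5, rng=rng)
         for u in clients]
global_update = aggregate(noisy)
```

Because each client noises its own update, the server never sees a raw update, which matches the abstract's claim that aggregation leaks no training-data information.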

Keywords
Federated learning; Data integrity; Privacy-preserving; Differential privacy
Published
2022-03-13
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-96791-8_20
Copyright © 2021–2025 ICST