Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II

Research Article

A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy

Cite
  • @INPROCEEDINGS{10.1007/978-3-031-24386-8_8,
        author={Yupeng Deng and Xiong Li and Jiabei He and Yuzhen Liu and Wei Liang},
        title={A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy},
        proceedings={Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II},
        proceedings_a={COLLABORATECOM PART 2},
        year={2023},
        month={1},
        keywords={Differential privacy, Stochastic gradient descent, Empirical risk minimization, Machine learning},
        doi={10.1007/978-3-031-24386-8_8}
    }
    
  • Yupeng Deng, Xiong Li, Jiabei He, Yuzhen Liu, Wei Liang. Year: 2023. A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy. COLLABORATECOM PART 2. Springer. DOI: 10.1007/978-3-031-24386-8_8
Yupeng Deng1, Xiong Li2,*, Jiabei He3, Yuzhen Liu1, Wei Liang1
  • 1: School of Computer Science and Engineering, Hunan University of Science and Technology
  • 2: School of Computer Science and Engineering, University of Electronic Science and Technology of China
  • 3: College of Computer Science, Nankai University
*Contact email: lixiong@uestc.edu.cn

Abstract

The application of differential privacy (DP) in federated learning can effectively protect users' privacy from inference attacks. However, the privacy budget allocation strategies of most DP schemes are difficult to apply in complex scenarios and severely degrade model usability. This paper designs a stochastic gradient descent algorithm based on adaptive DP, which allocates a suitable privacy budget to each iteration according to the trend of the noisy gradients. As the model parameters are optimized, the scheme adaptively controls the noise scale to match the shrinking gradients and resizes the allocated privacy budget when it becomes too small. Compared with other DP schemes, our scheme flexibly reduces the negative effect of the added noise on model convergence and consequently improves the training efficiency of the model. We implemented the scheme on five datasets (Adult, BANK, etc.) with three models (SVM, CNN, etc.) and compared it with other popular schemes in terms of classification accuracy and training time. The scheme proved efficient and practical: with a privacy budget of 0.05, it achieved 2% higher model accuracy than the second-best scheme while requiring only 4% of its training time.
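The abstract describes per-iteration gradient perturbation whose privacy budget is allocated adaptively from the trend of the noisy gradients. The sketch below is a minimal illustration of that idea in the style of DP-SGD, not the authors' exact algorithm: the logistic-regression model, the synthetic data, the budget-sharing heuristic, and all hyper-parameters are placeholder assumptions, and the Gaussian-mechanism calibration uses simple (non-tight) composition across iterations.

import numpy as np

# Minimal DP-SGD sketch with an adaptive per-iteration privacy budget.
# The model, the budget-sharing heuristic, and all hyper-parameters are
# illustrative assumptions, not the paper's exact algorithm.

def adaptive_dp_sgd(X, y, total_eps=0.05, delta=1e-5, T=100, C=1.0, lr=0.1):
    n, d = X.shape
    w = np.zeros(d)
    remaining_eps = total_eps
    prev_norm = None
    for t in range(T):
        # Per-example logistic-regression gradients, clipped to L2 norm C
        # (standard DP-SGD clipping), then averaged.
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        per_example = X * (p - y)[:, None]
        norms = np.linalg.norm(per_example, axis=1, keepdims=True)
        clipped = per_example * np.minimum(1.0, C / np.maximum(norms, 1e-12))
        grad = clipped.mean(axis=0)

        # Adaptive allocation: spend a small fraction of the remaining budget,
        # enlarging that fraction when the gradient norm stops shrinking so
        # later iterations are not drowned out by noise.
        norm = np.linalg.norm(grad)
        share = 0.01 if prev_norm is None or norm < 0.9 * prev_norm else 0.02
        eps_t = max(remaining_eps * share, 1e-6)
        remaining_eps = max(remaining_eps - eps_t, 0.0)
        prev_norm = norm

        # Gaussian mechanism calibrated to eps_t with sensitivity C / n.
        sigma = C * np.sqrt(2.0 * np.log(1.25 / delta)) / (n * eps_t)
        w -= lr * (grad + np.random.normal(0.0, sigma, size=d))
    return w

# Toy usage on synthetic data; accuracy varies strongly with the noise level.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X @ rng.normal(size=5) > 0).astype(float)
    w = adaptive_dp_sgd(X, y)
    print("training accuracy:", np.mean(((X @ w) > 0) == y))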

Keywords
Differential privacy, Stochastic gradient descent, Empirical risk minimization, Machine learning
Published
2023-01-25
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-24386-8_8