
Research Article
A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy
@INPROCEEDINGS{10.1007/978-3-031-24386-8_8, author={Yupeng Deng and Xiong Li and Jiabei He and Yuzhen Liu and Wei Liang}, title={A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy}, proceedings={Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II}, proceedings_a={COLLABORATECOM PART 2}, year={2023}, month={1}, keywords={Differential privacy; Stochastic gradient descent; Empirical risk minimization; Machine learning}, doi={10.1007/978-3-031-24386-8_8} }
- Yupeng Deng
- Xiong Li
- Jiabei He
- Yuzhen Liu
- Wei Liang
Year: 2023
COLLABORATECOM PART 2
Springer
DOI: 10.1007/978-3-031-24386-8_8
Abstract
The application of differential privacy (DP) in federated learning can effectively protect users' privacy against inference attacks. However, the privacy budget allocation strategies in most DP schemes are not only difficult to apply in complex scenarios but also severely degrade model usability. This paper designs a stochastic gradient descent algorithm based on adaptive DP, which allocates a suitable privacy budget to each iteration according to the tendency of the noisy gradients. As the model parameters are optimized, the scheme adaptively controls the noise scale to match the shrinking gradients, and it enlarges the allocated privacy budget when it becomes too small. Compared with other DP schemes, our scheme flexibly reduces the negative effect of the added noise on model convergence and consequently improves training efficiency. We implemented the scheme on five datasets (Adult, BANK, etc.) with three models (SVM, CNN, etc.) and compared it with other popular schemes in terms of classification accuracy and training time. The results show that our scheme is efficient and practical: with a privacy budget of 0.05, it achieves 2% higher model accuracy than the second-best scheme while requiring only about 4% of its training time.
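To make the general mechanism concrete, the sketch below shows a minimal Python/NumPy version of noisy gradient descent in which the noise scale shrinks as the clipped gradients shrink. It is only an illustration of the idea summarized in the abstract, not the authors' published algorithm: the names (adaptive_dp_sgd, grad_fn, sigma0) and the specific adaptation rule are assumptions made for this example.

    # Illustrative sketch only: DP-style noisy SGD with gradient clipping
    # and a noise scale that adapts to the current clipped gradient norm.
    import numpy as np

    def clip(grad, C):
        """Scale the gradient so its L2 norm is at most C."""
        norm = np.linalg.norm(grad)
        return grad * min(1.0, C / (norm + 1e-12))

    def adaptive_dp_sgd(grad_fn, w, steps, lr=0.1, C=1.0, sigma0=2.0, rng=None):
        """Noisy SGD; sigma tracks the clipped gradient norm, mimicking an
        adaptive per-iteration noise scale (hypothetical adaptation rule)."""
        rng = rng or np.random.default_rng(0)
        sigma = sigma0
        for _ in range(steps):
            g = clip(grad_fn(w), C)
            noisy_g = g + rng.normal(0.0, sigma * C, size=g.shape)
            w = w - lr * noisy_g
            # Smaller gradients -> smaller noise scale, floored so the
            # noise never vanishes entirely.
            sigma = max(0.5, sigma0 * np.linalg.norm(g) / C)
        return w

    # Example usage on a toy quadratic loss 0.5*||w||^2 (gradient is w):
    w_final = adaptive_dp_sgd(lambda w: w, np.ones(5), steps=100)

The design point this sketch is meant to convey is that the noise added at each step is tied to the magnitude of the current gradients rather than fixed in advance, so late iterations with small gradients are not overwhelmed by noise calibrated for early iterations.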