
Research Article
Why Not Model Privacy?: An Efficient and Practical Scheme for Federated Learning Model Security
@INPROCEEDINGS{10.1007/978-3-031-60347-1_33,
  author={Wang Shuai and Renwan Bi and Youliang Tian and Jinbo Xiong},
  title={Why Not Model Privacy?: An Efficient and Practical Scheme for Federated Learning Model Security},
  proceedings={Mobile Multimedia Communications. 16th EAI International Conference, MobiMedia 2023, Guilin, China, July 22-24, 2023, Proceedings},
  proceedings_a={MOBIMEDIA},
  year={2024},
  month={10},
  keywords={privacy-preserving federated learning, homomorphic encryption, model security},
  doi={10.1007/978-3-031-60347-1_33}
}
- Wang Shuai
- Renwan Bi
- Youliang Tian
- Jinbo Xiong
Year: 2024
Why Not Model Privacy?: An Efficient and Practical Scheme for Federated Learning Model Security
MOBIMEDIA
Springer
DOI: 10.1007/978-3-031-60347-1_33
Abstract
Privacy-Preserving Federated Learning (PPFL), a new paradigm for secure and efficient federated learning, offers the advantage of secure aggregation without loss of accuracy. However, PPFL focuses on protecting the privacy of private data, while the privacy of models is equally important. When federated learning is applied to real-world scenarios, model sharing among institutions increases the risk of model privacy leakage and economic loss. To address these issues, we design PriM, a model privacy-preserving scheme based on fully homomorphic encryption. Specifically, we adopt the CKKS fully homomorphic encryption scheme to guarantee the security of the model; CKKS supports both floating-point and vector encryption, which avoids the significant overhead of single-value encryption incurred by existing homomorphic encryption-based PPFL schemes. We then keep the model computation process confidential, so that no model information is revealed to the participating entities. Finally, experiments show that our scheme achieves efficiency comparable to the baseline algorithm FedAvg while being more practical.
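The abstract does not detail PriM's protocol, but the core idea it describes, encrypting whole model-parameter vectors under CKKS so that aggregation can run on ciphertexts rather than on one value at a time, can be sketched as follows. This is a minimal illustration assuming the open-source TenSEAL library; the parameter choices, the hypothetical client updates, and the simple ciphertext averaging are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: encrypted FedAvg-style aggregation of model updates
# under the CKKS scheme (via the TenSEAL library). Illustrative only;
# PriM's actual protocol and parameters are described in the paper.
import tenseal as ts

# CKKS context: polynomial degree and coefficient-modulus sizes are
# typical example values, not the ones used by PriM.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Each client encrypts its (flattened) model update as one CKKS vector,
# instead of encrypting each parameter individually.
client_updates = [
    [0.12, -0.03, 0.45, 0.08],   # hypothetical client 1 update
    [0.10, -0.01, 0.40, 0.05],   # hypothetical client 2 update
]
encrypted_updates = [ts.ckks_vector(context, u) for u in client_updates]

# The aggregator sums and averages ciphertexts without seeing plaintexts.
aggregate = encrypted_updates[0]
for enc in encrypted_updates[1:]:
    aggregate = aggregate + enc
aggregate = aggregate * (1.0 / len(encrypted_updates))

# Only the secret-key holder can decrypt the averaged update.
print(aggregate.decrypt())  # approximately [0.11, -0.02, 0.425, 0.065]
```

Because CKKS packs many floating-point values into a single ciphertext, one encryption and one homomorphic addition cover an entire parameter vector, which is the source of the efficiency gain the abstract claims over single-value homomorphic encryption.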