
Research Article
A Fast and Accurate Non-interactive Privacy-Preserving Neural Network Inference Framework
@INPROCEEDINGS{10.1007/978-3-031-51399-2_9,
  author    = {Tao, Hongyao and Xu, Chungen and Zhang, Pan},
  title     = {A Fast and Accurate Non-interactive Privacy-Preserving Neural Network Inference Framework},
  booktitle = {Tools for Design, Implementation and Verification of Emerging Information Technologies. 18th EAI International Conference, TRIDENTCOM 2023, Nanjing, China, November 11-13, 2023, Proceedings},
  publisher = {Springer},
  year      = {2024},
  month     = {1},
  keywords  = {Homomorphic encryption, Privacy-preserving, Neural networks},
  doi       = {10.1007/978-3-031-51399-2_9}
}
Hongyao Tao
Chungen Xu
Pan Zhang
Year: 2024
A Fast and Accurate Non-interactive Privacy-Preserving Neural Network Inference Framework
TRIDENTCOM
Springer
DOI: 10.1007/978-3-031-51399-2_9
Abstract
With its remarkable successes, machine learning is becoming increasingly popular and widespread. Machine Learning as a Service (MLaaS), offered by cloud providers, is widely used by users who cannot bear the cost of training machine learning models themselves. However, the privacy issues involved present a significant challenge. Homomorphic encryption (HE), known for its ability to perform computations directly on ciphertexts, is widely employed in privacy-preserving computation. Interactive privacy-preserving neural networks suffer from security vulnerabilities and excessive communication and computation costs; moreover, linear layers account for most of the inference time, and SIMD HE struggles to compute arbitrary nonlinear functions precisely. To address these issues, we propose a non-interactive privacy-preserving neural network inference framework that accelerates linear computations and ensures accurate evaluation of arbitrary nonlinear functions. Specifically, we use CKKS encryption to enable private neural network inference under floating-point arithmetic. Leveraging the characteristics of both wordwise HE and bitwise HE, we design a non-interactive, fast matrix multiplication scheme that achieves up to 500× acceleration across different matrix dimensions. By converting between different types of homomorphic ciphertexts and employing lookup tables, we evaluate arbitrary nonlinear operations accurately without requiring interaction. Experimental results demonstrate that our framework achieves the same accuracy as pre-trained neural network models on plaintext, without incurring any additional accuracy loss.
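The lookup-table idea mentioned in the abstract can be illustrated with a small plaintext analogue. This is a minimal sketch, not the paper's scheme: in bitwise HE (e.g., TFHE-style schemes) the table lookup is performed homomorphically on an encrypted index, so any function of a discretized input is computed exactly. The bit width, fixed-point scale, and choice of sigmoid below are illustrative assumptions, not taken from the paper.

```python
# Plaintext sketch of lookup-table (LUT) evaluation of a nonlinear function.
# A table over all 2^BITS discretized inputs is precomputed once; evaluation
# is then a single lookup, regardless of how complicated the function is.
import math

BITS = 8        # assumed input bit width (illustrative)
SCALE = 16.0    # assumed fixed-point scale (illustrative)

def encode(x: float) -> int:
    """Map a float to an unsigned BITS-bit fixed-point table index."""
    q = int(round(x * SCALE)) + (1 << (BITS - 1))   # shift to unsigned range
    return max(0, min((1 << BITS) - 1, q))          # clamp into the table

def decode(q: int) -> float:
    """Inverse of encode, back to a float."""
    return (q - (1 << (BITS - 1))) / SCALE

# Precompute the table for an arbitrary nonlinear function, here the sigmoid.
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
TABLE = [sigmoid(decode(q)) for q in range(1 << BITS)]

def lut_eval(x: float) -> float:
    """Evaluate sigmoid(x) by one table lookup on the encoded input."""
    return TABLE[encode(x)]

# Inputs representable on the fixed-point grid are evaluated exactly.
print(abs(lut_eval(0.5) - sigmoid(0.5)))
```

In the homomorphic setting the same precomputed table is applied to a ciphertext, which is what allows exact evaluation of functions that SIMD (wordwise) HE can only approximate by polynomials.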