
Research Article
HeSUN: Homomorphic Encryption for Secure Unbounded Neural Network Inference
@INPROCEEDINGS{10.1007/978-3-031-64948-6_21,
  author={Duy Tung Khanh Nguyen and Dung Hoang Duong and Willy Susilo and Yang-Wai Chow},
  title={HeSUN: Homomorphic Encryption for Secure Unbounded Neural Network Inference},
  proceedings={Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part I},
  proceedings_a={SECURECOMM},
  year={2024},
  month={10},
  keywords={Privacy-preserving machine learning; Homomorphic encryption},
  doi={10.1007/978-3-031-64948-6_21}
}
Authors: Duy Tung Khanh Nguyen, Dung Hoang Duong, Willy Susilo, Yang-Wai Chow
Year: 2024
Venue: SECURECOMM
Publisher: Springer
DOI: 10.1007/978-3-031-64948-6_21
Abstract
In recent years, homomorphic encryption (HE) has become a crucial tool for secure neural network inference (SNNI), enabling a server to classify clients' encrypted data while guaranteeing privacy. However, current HE-based frameworks limit the depth of the neural networks they can evaluate. The main reason for this limitation is the growth of noise and of the scaling factor in ciphertexts after successive homomorphic operations. Gentry's bootstrapping is the standard remedy for noise growth, but it is a costly procedure and requires the circular security assumption. Scaling factor growth remains a challenging problem because rescaling relies on division, which current HE schemes do not natively support. This paper proposes a double ciphertext refreshing protocol called DoubleR, which refreshes both the noise and the scaling factor at the same time. Our protocol is proven secure in the semi-honest model without introducing additional assumptions. Experimental results show that our protocol outperforms bootstrapping by $300\times$ in running time. Based on DoubleR, we build a versatile framework for SNNI called HeSUN, which significantly accelerates inference with comparable communication costs.
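
As background for the scaling factor problem the abstract refers to (the abstract does not name the underlying scheme; CKKS-style approximate HE is assumed here), each homomorphic multiplication multiplies the scales of its operands:

\[
\mathrm{Enc}(\Delta m_1) \otimes \mathrm{Enc}(\Delta m_2) = \mathrm{Enc}\!\left(\Delta^2 m_1 m_2\right).
\]

Rescaling divides the ciphertext by a modulus $q_\ell \approx \Delta$ to bring the scale back down,

\[
\mathrm{Rescale}\!\left(\mathrm{Enc}\!\left(\Delta^2 m_1 m_2\right)\right) \approx \mathrm{Enc}\!\left(\Delta\, m_1 m_2\right),
\]

but it consumes one ciphertext modulus level per multiplication, so only a predetermined number of multiplications can be evaluated before the levels run out. That is the depth bound on the network that the paper aims to remove.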
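The abstract does not detail how DoubleR operates internally. The toy sketch below illustrates the generic client-aided refreshing pattern that interactive protocols in the semi-honest model typically follow (blind, decrypt-and-re-encrypt, unblind), under the assumption that refreshing involves a round trip through the key-holding client; all names, parameters, and the toy additive "encryption" are hypothetical stand-ins, not the paper's construction.

import secrets

MOD = 2**61 - 1        # toy ciphertext modulus
FRESH_SCALE = 2**20    # target scaling factor Delta after a refresh


class ToyHE:
    """Toy additive one-time-pad 'encryption': Enc(m) = m + k (mod MOD).

    Additively homomorphic, which is all this sketch needs. A real
    deployment would use an HE scheme such as CKKS instead.
    """

    def __init__(self):
        self.key = secrets.randbelow(MOD)

    def enc(self, m):
        return (m + self.key) % MOD

    def dec(self, c):
        return (c - self.key) % MOD


def server_blind(ct):
    """Server, step 1: additively mask the ciphertext so the client learns
    nothing about the plaintext (semi-honest model). The mask range stays
    below MOD here only so the toy integer arithmetic never wraps around."""
    r = secrets.randbelow(2**60)
    return (ct + r) % MOD, r


def client_refresh(he, blinded_ct, old_scale):
    """Client (key holder): decrypt the blinded value, rescale by exact
    division (easy in the clear), and re-encrypt. In a real HE scheme,
    decrypting also discards the accumulated noise, so the scale and the
    noise are refreshed in a single round trip."""
    blinded_pt = he.dec(blinded_ct)
    rescaled = blinded_pt * FRESH_SCALE // old_scale
    return he.enc(rescaled)


def server_unblind(refreshed_ct, r, old_scale):
    """Server, step 2: homomorphically subtract the rescaled mask."""
    r_rescaled = r * FRESH_SCALE // old_scale
    return (refreshed_ct - r_rescaled) % MOD


if __name__ == "__main__":
    he = ToyHE()
    old_scale = FRESH_SCALE**2        # scale after one multiplication: Delta^2
    message = 42
    ct = he.enc(message * old_scale)  # ciphertext whose scale has grown

    blinded, r = server_blind(ct)                        # server -> client
    refreshed = client_refresh(he, blinded, old_scale)   # client -> server
    ct_fresh = server_unblind(refreshed, r, old_scale)

    assert he.dec(ct_fresh) == message * FRESH_SCALE
    print("refreshed plaintext:", he.dec(ct_fresh) / FRESH_SCALE)

The design point such interactive protocols exploit is that both division (rescaling) and noise removal are trivial in the clear, so a single masked round trip through the client can reset both at once, which matches the abstract's claim that DoubleR refreshes the noise and the scaling factor simultaneously without new hardness assumptions.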