Research Article
A Lightweight Face Recognition Model based on MobileFaceNet for Limited Computation Environment
@ARTICLE{10.4108/eai.28-2-2022.173547,
  author={Jianyu Xiao and Guoli Jiang and Huanhua Liu},
  title={A Lightweight Face Recognition Model based on MobileFaceNet for Limited Computation Environment},
  journal={EAI Endorsed Transactions on Internet of Things},
  volume={7},
  number={27},
  publisher={EAI},
  journal_a={IOT},
  year={2022},
  month={2},
  keywords={Face recognition, MobileFaceNet, weak computing environment, channel attention mechanism},
  doi={10.4108/eai.28-2-2022.173547}
}
Jianyu Xiao
Guoli Jiang
Huanhua Liu
Year: 2022
A Lightweight Face Recognition Model based on MobileFaceNet for Limited Computation Environment
IOT
EAI
DOI: 10.4108/eai.28-2-2022.173547
Abstract
Face recognition methods based on deep convolutional neural networks are difficult to deploy on embedded devices. In this work, we optimize the MobileFaceNet face recognition network so that it can be deployed in embedded environments. First, we reduce the number of model parameters by reducing the number of layers in MobileFaceNet. Then, the h-ReLU6 activation function replaces PReLU in the original model. Finally, the efficient channel attention (ECA) module is introduced to learn the importance of each feature channel. After optimization, the MobileFaceNet parameters are compressed to 3.4 MB, smaller than the original model (4.9 MB); the mAPs reach 98.52%, 97.54% and 91.33% on the LFW, VGGFace2 and self-built test sets, respectively; and the recognition time is about 85 ms per photo. These results show that the proposed method achieves a good balance between model complexity and model performance.
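Of the three changes the abstract describes, the efficient channel attention step is the most self-contained, and its mechanism can be sketched in plain Python: globally average each feature channel, pass the resulting channel descriptor through a 1D convolution, and squash with a sigmoid to get one importance weight per channel. The fixed averaging kernel and toy activations below are illustrative assumptions, not the paper's learned parameters (in ECA the 1D kernel is learnable and its size adapts to the channel count).

```python
import math

def eca_weights(channel_means, k=3):
    """Sketch of efficient channel attention (ECA): a 1D convolution
    over globally averaged channel descriptors, followed by a sigmoid,
    yields one importance weight per channel."""
    pad = k // 2
    kernel = [1.0 / k] * k  # stand-in for the learnable 1D kernel
    padded = [0.0] * pad + list(channel_means) + [0.0] * pad
    conv = [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(channel_means))]
    return [1.0 / (1.0 + math.exp(-v)) for v in conv]

def reweight(feature_channels, weights):
    """Scale each channel's activations by its attention weight."""
    return [[w * x for x in ch] for ch, w in zip(feature_channels, weights)]

# Toy example: 4 channels, each with a few spatial activations.
channels = [[0.2, 0.4], [1.5, 1.1], [-0.3, 0.1], [2.0, 1.8]]
means = [sum(ch) / len(ch) for ch in channels]  # global average pooling
w = eca_weights(means, k=3)
scaled = reweight(channels, w)
```

Because the attention is computed from pooled means with a shared 1D kernel, the extra cost is negligible relative to the backbone, which is why such a module suits the limited-computation setting the paper targets.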
Copyright © 2022 Jianyu Xiao et al., licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution license, which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.