
Research Article
An Optimized Eight-Layer Convolutional Neural Network Based on Blocks for Chinese Fingerspelling Sign Language Recognition
@INPROCEEDINGS{10.1007/978-3-031-50580-5_2,
  author={Huiwen Chu and Chenlei Jiang and Jingwen Xu and Qisheng Ye and Xianwei Jiang},
  title={An Optimized Eight-Layer Convolutional Neural Network Based on Blocks for Chinese Fingerspelling Sign Language Recognition},
  booktitle={Multimedia Technology and Enhanced Learning. 5th EAI International Conference, ICMTEL 2023, Leicester, UK, April 28-29, 2023, Proceedings, Part IV},
  series={ICMTEL PART 4},
  year={2024},
  month={2},
  keywords={fingerspelling sign language recognition; batch normalization; data augmentation; pooling; dropout},
  doi={10.1007/978-3-031-50580-5_2}
}
- Huiwen Chu
- Chenlei Jiang
- Jingwen Xu
- Qisheng Ye
- Xianwei Jiang
Year: 2024
An Optimized Eight-Layer Convolutional Neural Network Based on Blocks for Chinese Fingerspelling Sign Language Recognition
ICMTEL PART 4
Springer
DOI: 10.1007/978-3-031-50580-5_2
Abstract
Sign language plays a significant role in communication for the hearing-impaired and the speech-impaired, and sign language recognition helps break down barriers between disabled and non-disabled people. However, the task remains challenging for artificial intelligence because complex gestures must be recognized in real time and with high accuracy. Fingerspelling sign language recognition methods based on convolutional neural networks have gradually gained popularity in recent years thanks to the advancement of deep learning techniques, and fingerspelling-based recognition has taken center stage. This study proposed an optimized eight-layer convolutional neural network based on blocks (CNN-BB) for fingerspelling recognition of Chinese sign language. Three different block types (Conv-BN-ReLU-Pooling, Conv-BN-ReLU, and Conv-BN-ReLU-BN) were adopted, and advanced techniques such as batch normalization, dropout, pooling, and data augmentation were employed. The results show that our CNN-BB achieved an MSD of 93.32 ± 1.42%, which is superior to eight state-of-the-art approaches.
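The abstract names three block types but not how they are stacked. The following is a hypothetical sketch (not the authors' code) of how such blocks could compose a network with eight convolutional layers; the per-block counts, their order, and the classifier head are all assumptions for illustration.

```python
# Hypothetical composition of the CNN-BB from the three block types named
# in the abstract. Block counts and ordering are assumed, not taken from
# the paper; layers are represented symbolically for clarity.

def conv_bn_relu_pooling():
    return ["Conv", "BatchNorm", "ReLU", "Pooling"]

def conv_bn_relu():
    return ["Conv", "BatchNorm", "ReLU"]

def conv_bn_relu_bn():
    return ["Conv", "BatchNorm", "ReLU", "BatchNorm"]

def build_cnn_bb():
    """Stack blocks so the sketch contains eight Conv layers in total."""
    blocks = (
        [conv_bn_relu_pooling()] * 3   # assumed: three pooled blocks
        + [conv_bn_relu()] * 3         # assumed: three plain blocks
        + [conv_bn_relu_bn()] * 2      # assumed: two double-BN blocks
    )
    # Flatten blocks into one layer sequence.
    layers = [layer for block in blocks for layer in block]
    # Assumed classifier head with dropout, as mentioned in the abstract.
    layers += ["Dropout", "FullyConnected", "Softmax"]
    return layers

if __name__ == "__main__":
    net = build_cnn_bb()
    print(net.count("Conv"))  # eight convolutional layers in this sketch
```

In a real implementation each symbolic entry would be a trainable layer (e.g. a 2D convolution followed by batch normalization), with data augmentation applied to the training images before they enter the network.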