Research Article
Residual network based on convolution attention model and feature fusion for dance motion recognition
@ARTICLE{10.4108/eai.16-12-2021.172434, author={Dianhuai Shen and Xueying Jiang and Lin Teng}, title={Residual network based on convolution attention model and feature fusion for dance motion recognition}, journal={EAI Endorsed Transactions on Scalable Information Systems}, volume={9}, number={4}, publisher={EAI}, journal_a={SIS}, year={2021}, month={12}, keywords={dance motion recognition, residual network, convolution attention model, feature fusion}, doi={10.4108/eai.16-12-2021.172434} }
Dianhuai Shen, Xueying Jiang, Lin Teng
Year: 2021
Journal: EAI Endorsed Transactions on Scalable Information Systems (SIS)
Publisher: EAI
DOI: 10.4108/eai.16-12-2021.172434
Abstract
Traditional posture recognition methods suffer from low accuracy. We therefore propose a residual network based on a convolutional attention model and feature fusion for dance motion recognition. First, fused features describing the relative positions, joint angles, and limb length ratios of the human body are constructed from skeletal key-point information. Shallow features of the original dance image are extracted and compressed by convolution and pooling layers. Stacked residual blocks then learn deep features, which alleviates gradient dispersion and network degradation. A convolutional attention module assigns weights to the deep dance features. Finally, dance motion detection in complex dance scenes is realized. The proposed method identifies dance motions accurately; compared with other recognition algorithms, it achieves the highest recognition accuracy and greater recognition efficiency.
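The abstract outlines a pipeline of shallow convolution and pooling, stacked residual blocks, a convolutional attention module, and fusion with skeleton-derived features. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that pipeline under assumed settings (layer widths, number of classes, and a hypothetical skeleton feature dimension `skeleton_dim` covering relative positions, joint angles, and limb length ratios). The attention module follows a generic CBAM-style channel-then-spatial weighting, which is one common reading of "convolutional attention"; the paper's exact module may differ.

```python
# Minimal sketch of the described pipeline (assumed PyTorch implementation,
# illustrative hyperparameters; not the authors' original code).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 convolutional residual block with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class ConvAttention(nn.Module):
    """CBAM-style attention: channel weighting followed by spatial weighting."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from globally average-pooled features.
        w = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * w
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))


class DanceMotionNet(nn.Module):
    def __init__(self, num_classes=10, skeleton_dim=51):
        super().__init__()
        # Shallow feature extraction and compression (convolution + pooling).
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Stacked residual blocks for deep feature learning.
        self.res_blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(4)])
        self.attention = ConvAttention(64)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fuse image features with skeleton-derived features (relative
        # positions, joint angles, limb length ratios) before classification.
        self.classifier = nn.Linear(64 + skeleton_dim, num_classes)

    def forward(self, image, skeleton_feats):
        x = self.pool(self.attention(self.res_blocks(self.stem(image)))).flatten(1)
        return self.classifier(torch.cat([x, skeleton_feats], dim=1))


if __name__ == "__main__":
    model = DanceMotionNet()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 51))
    print(logits.shape)  # torch.Size([2, 10])
```

In this reading, feature fusion is done by concatenating the attention-weighted image embedding with the skeleton feature vector before the final classifier; the paper may fuse at an earlier stage.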
Copyright © 2021 Dianhuai Shen et al., licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution license, which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.