
Research Article
Decoupled 2S-AGCN Human Behavior Recognition Based on New Partition Strategy
@INPROCEEDINGS{10.1007/978-3-031-55471-1_6,
  author={Liu Qiuming and Chen Longping and Wang Da and Xiao He and Zhou Yang and Wu Dong},
  title={Decoupled 2S-AGCN Human Behavior Recognition Based on New Partition Strategy},
  proceedings={Mobile Networks and Management. 13th EAI International Conference, MONAMI 2023, Yingtan, China, October 27-29, 2023, Proceedings},
  proceedings_a={MONAMI},
  year={2024},
  month={3},
  keywords={2S-AGCN; New partition strategy; DC-GCN; Action recognition; NTU RGB+D},
  doi={10.1007/978-3-031-55471-1_6}
}
- Liu Qiuming
- Chen Longping
- Wang Da
- Xiao He
- Zhou Yang
- Wu Dong
Year: 2024
Decoupled 2S-AGCN Human Behavior Recognition Based on New Partition Strategy
MONAMI
Springer
DOI: 10.1007/978-3-031-55471-1_6
Abstract
Human skeleton data offers better environmental adaptability and motion expressiveness than RGB video data, so action recognition algorithms based on skeletal joint data have attracted increasing attention and research. In recent years, skeleton-based action recognition models built on graph convolutional networks (GCNs) have demonstrated outstanding performance. However, most GCN-based skeletal action recognition models rely on three fixed spatial-configuration partitions and manually set the connection relationships between skeletal joints, which prevents them from adapting well to the varying characteristics of different actions. In addition, all channels of the input feature X share the same graph convolution kernel, which leads to coupled aggregation. To address these problems, this paper proposes a new partition strategy that better extracts the feature information of each node's neighbors in the skeleton graph and adaptively learns the connection relationships between joints, and introduces decoupled graph convolution (DC-GCN) into each partition to resolve the coupled-aggregation problem. Experiments on the NTU RGB+D dataset show that the proposed method achieves higher action recognition accuracy than most current methods.
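The decoupled-aggregation idea mentioned in the abstract can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: the class name `DecoupledGraphConv` and its parameters are hypothetical, and it only shows the core mechanism in which output channels are split into groups and each group aggregates joint features over its own learnable adjacency matrix, instead of all channels sharing a single graph convolution kernel. The adaptive adjacency learning and the new partition strategy of the full model are omitted.

```python
import torch
import torch.nn as nn


class DecoupledGraphConv(nn.Module):
    """Sketch of decoupled graph convolution: each channel group
    aggregates over its own learnable adjacency (no shared kernel)."""

    def __init__(self, in_channels, out_channels, num_joints, num_groups=8):
        super().__init__()
        assert out_channels % num_groups == 0
        self.num_groups = num_groups
        # One trainable adjacency matrix per channel group (decoupled aggregation),
        # initialized to the identity so each joint starts by keeping its own feature.
        self.adj = nn.Parameter(torch.eye(num_joints).repeat(num_groups, 1, 1))
        # Pointwise feature transform shared across joints and frames.
        self.lin = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (N, C_in, T, V) -- batch, channels, frames, joints
        x = self.lin(x)                                        # (N, C_out, T, V)
        n, c, t, v = x.shape
        x = x.view(n, self.num_groups, c // self.num_groups, t, v)
        # Aggregate neighbor features with the adjacency of each channel group.
        x = torch.einsum('ngctv,gvw->ngctw', x, self.adj)
        return x.reshape(n, c, t, v)


if __name__ == "__main__":
    # NTU RGB+D skeletons have 25 joints per person.
    layer = DecoupledGraphConv(in_channels=3, out_channels=64, num_joints=25)
    out = layer(torch.randn(2, 3, 64, 25))
    print(out.shape)  # torch.Size([2, 64, 64, 25])
```

In a coupled GCN layer all 64 output channels would be aggregated with one shared adjacency; here each of the 8 channel groups learns its own joint-connection pattern, which is the property DC-GCN exploits within each partition.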