
Research Article
ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition
@INPROCEEDINGS{10.1007/978-3-030-99203-3_9,
  author={Zixuan Yan and Rabih Younes and Jason Forsyth},
  title={ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition},
  proceedings={Mobile Computing, Applications, and Services. 12th EAI International Conference, MobiCASE 2021, Virtual Event, November 13--14, 2021, Proceedings},
  proceedings_a={MOBICASE},
  year={2022},
  month={3},
  keywords={Human activity recognition (HAR); Convolutional neural network (CNN); ResNet; Saliency map},
  doi={10.1007/978-3-030-99203-3_9}
}
- Zixuan Yan
- Rabih Younes
- Jason Forsyth
Year: 2022
ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition
MOBICASE
Springer
DOI: 10.1007/978-3-030-99203-3_9
Abstract
Human activity recognition (HAR) has increasingly adopted deep learning in place of well-established analysis pipelines that rely on hand-crafted feature extraction and classification techniques. However, the convolutional neural network (CNN) architectures used in HAR tasks are still mostly VGG-like, even as novel architectures keep emerging. In this work, we present a novel approach to HAR that incorporates residual learning into a ResNet-like CNN model, improving on existing approaches by reducing the computational complexity of the recognition task without sacrificing accuracy. Specifically, our ResNet-like CNN achieves nearly 1% higher accuracy than the state of the art with a more than tenfold reduction in parameters. In addition, we adopt the saliency map method to visualize the importance of every input channel. This enables further work such as dimensionality reduction to improve computational efficiency or finding the optimal position(s) for sensor node(s).
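
To illustrate the two ingredients named in the abstract (a residual block and a gradient-based saliency map over input channels), the sketch below shows a minimal PyTorch example. It is not the authors' architecture: the 1-D convolution sizes, the ResidualBlock1D class, the channel_saliency helper, and the assumed model output shape (batch, n_classes) are all illustrative assumptions.

```python
# Minimal sketch, assuming a 1-D CNN over sensor channels for HAR.
# Layer sizes and names are illustrative, not the authors' exact model.
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Two 1-D convolutions with a skip connection (residual learning)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: F(x) + x

def channel_saliency(model, x, target_class):
    """Gradient-based saliency: |d(class score)/d(input)|, averaged over
    the time axis to rank the importance of each input sensor channel.
    Assumes model maps (batch, channels, time) -> (batch, n_classes)."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().mean(dim=-1)  # shape: (1, n_channels)
```

Under these assumptions, ranking the values returned by channel_saliency would indicate which sensor channels the network relies on most, which is the kind of information the abstract suggests could guide dimensionality reduction or sensor placement.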