
Research Article
SelfAct: Personalized Activity Recognition Based on Self-Supervised and Active Learning
@INPROCEEDINGS{10.1007/978-3-031-63989-0_19,
  author={Luca Arrotta and Gabriele Civitarese and Claudio Bettini},
  title={SelfAct: Personalized Activity Recognition Based on Self-Supervised and Active Learning},
  proceedings={Mobile and Ubiquitous Systems: Computing, Networking and Services. 20th EAI International Conference, MobiQuitous 2023, Melbourne, VIC, Australia, November 14--17, 2023, Proceedings, Part I},
  proceedings_a={MOBIQUITOUS},
  year={2024},
  month={7},
  keywords={Human Activity Recognition; Self-supervised Learning; Active Learning},
  doi={10.1007/978-3-031-63989-0_19}
}
- Luca Arrotta
- Gabriele Civitarese
- Claudio Bettini
Year: 2024
MOBIQUITOUS
Springer
DOI: 10.1007/978-3-031-63989-0_19
Abstract
Supervised Deep Learning (DL) models are currently the leading approach for sensor-based Human Activity Recognition (HAR) on wearable and mobile devices. However, training them requires large amounts of labeled data, whose collection is often time-consuming, expensive, and error-prone. At the same time, due to the intra- and inter-user variability of activity execution, activity models should be personalized for each user. In this work, we propose SelfAct: a novel framework for HAR that combines self-supervised and active learning to mitigate these problems. SelfAct leverages a large pool of unlabeled data collected from many users to pre-train a DL model through self-supervision, with the goal of learning a meaningful and efficient latent representation of sensor data. The resulting pre-trained model can then be used locally by new users, who fine-tune it through a novel unsupervised active learning strategy. Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results close to, or even better than, those of fully supervised approaches with only a few active learning queries.
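
The sketch below illustrates, in broad strokes, the kind of pipeline the abstract describes: an encoder is pre-trained on a pool of unlabeled sensor windows via self-supervision, and a new user's fine-tuning labels are requested through an active-learning criterion. It is not the paper's implementation: the SimCLR-style contrastive loss, the noise augmentation, the entropy-based query selection, and all names (SensorEncoder, jitter, nt_xent, select_queries) are generic placeholders chosen for illustration; in particular, the paper proposes its own unsupervised active learning strategy, which is not reproduced here.

```python
# Hypothetical sketch: self-supervised pre-training + active-learning queries.
# Generic stand-ins only; not the SelfAct method itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorEncoder(nn.Module):
    """1D-CNN encoder mapping a sensor window (C channels, T samples) to an embedding."""
    def __init__(self, channels=6, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

def jitter(x, sigma=0.05):
    """Simple augmentation: add Gaussian noise to the sensor window."""
    return x + sigma * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two augmented views (SimCLR-style)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def pretrain(encoder, unlabeled_windows, epochs=5, batch_size=64, lr=1e-3):
    """Self-supervised pre-training on a pool of unlabeled windows from many users."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(unlabeled_windows.size(0))
        for i in range(0, perm.numel(), batch_size):
            x = unlabeled_windows[perm[i:i + batch_size]]
            loss = nt_xent(encoder(jitter(x)), encoder(jitter(x)))
            opt.zero_grad(); loss.backward(); opt.step()

def select_queries(encoder, classifier, user_windows, budget=10):
    """Pick the new user's windows with the most uncertain predictions (highest entropy)."""
    with torch.no_grad():
        probs = F.softmax(classifier(encoder(user_windows)), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(budget).indices  # indices to send to the user for labeling

if __name__ == "__main__":
    # Toy data: 512 unlabeled windows (6 channels, 128 samples) and 100 windows from a new user.
    unlabeled = torch.randn(512, 6, 128)
    user_data = torch.randn(100, 6, 128)
    encoder = SensorEncoder()
    pretrain(encoder, unlabeled, epochs=1)
    classifier = nn.Linear(64, 5)  # 5 hypothetical activity classes
    queried = select_queries(encoder, classifier, user_data, budget=10)
    print("Windows to ask the user about:", queried.tolist())
```

In this sketch, the labels returned for the queried windows would be used to fine-tune the classifier (and optionally the encoder) on-device, keeping the labeling budget per user small, which is the trade-off the abstract highlights.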