11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services

Research Article

SenseMe: A System for Continuous, On-Device, and Multi-dimensional Context and Activity Recognition

@INPROCEEDINGS{10.4108/icst.mobiquitous.2014.257654,
    author={Preeti Bhargava and Nick Gramsky and Ashok Agrawala},
    title={SenseMe: A System for Continuous, On-Device, and Multi-dimensional Context and Activity Recognition},
    proceedings={11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services},
    publisher={ICST},
    proceedings_a={MOBIQUITOUS},
    year={2014},
    month={11},
    keywords={context-aware computing, context and activity recognition, mobile phone sensing, mobile systems and applications},
    doi={10.4108/icst.mobiquitous.2014.257654}
}
    
Preeti Bhargava¹,*, Nick Gramsky¹, Ashok Agrawala¹
  • 1: Department of Computer Science, University of Maryland, College Park
*Contact email: prbharga@cs.umd.edu

Abstract

To make context-aware systems more effective and provide timely, personalized, and relevant information to a user, the user's context or situation must be clearly defined along several dimensions. To this end, the system needs to simultaneously recognize multiple dimensions of the user's situation, such as location and physical activity, in an automated and unobtrusive manner. In this paper, we present SenseMe, a system that leverages a user's smartphone and its multiple sensors to perform continuous, on-device, and multi-dimensional context and activity recognition. It recognizes five dimensions of a user's situation in a robust, automated, scalable, power-efficient, and non-invasive manner to paint a context-rich picture of the user. We evaluate SenseMe against several metrics with the aid of two two-week-long live deployments involving 15 participants, and demonstrate accuracy that is improved over or comparable to existing systems, without requiring any user calibration or input.
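To make the idea of fusing several simultaneously recognized dimensions concrete, the following is a minimal, hypothetical Java sketch. It is not SenseMe's actual design or API: the class names (ContextSnapshot, classifyActivity), the dimensions shown, and the activity thresholds are all invented for illustration, standing in for whatever trained, power-efficient recognizers a real system would use.

    import java.util.List;

    /**
     * Illustrative sketch of multi-dimensional context fusion.
     * All names and thresholds here are hypothetical, not SenseMe's API.
     */
    public class ContextSketch {

        enum PhysicalActivity { STATIONARY, WALKING, RUNNING }

        /** One fused snapshot across several context dimensions. */
        record ContextSnapshot(String place, PhysicalActivity activity, String timeOfDay) { }

        /**
         * Toy threshold classifier over a window of accelerometer
         * magnitudes (m/s^2). A real system would use a trained model;
         * these thresholds are made up for the example.
         */
        static PhysicalActivity classifyActivity(List<Double> magnitudes) {
            double mean = magnitudes.stream()
                    .mapToDouble(Double::doubleValue).average().orElse(0);
            double variance = magnitudes.stream()
                    .mapToDouble(m -> (m - mean) * (m - mean)).average().orElse(0);
            double std = Math.sqrt(variance);
            if (std < 0.5) return PhysicalActivity.STATIONARY;
            if (std < 3.0) return PhysicalActivity.WALKING;
            return PhysicalActivity.RUNNING;
        }

        public static void main(String[] args) {
            // Synthetic accelerometer magnitudes: gravity plus gait noise.
            List<Double> window = List.of(9.8, 11.2, 8.5, 12.0, 9.1, 10.6);
            ContextSnapshot snapshot = new ContextSnapshot(
                    "campus cafe",              // e.g., from place recognition
                    classifyActivity(window),   // e.g., from the accelerometer
                    "afternoon");               // e.g., from the device clock
            System.out.println(snapshot);
        }
    }

Running this prints a single fused snapshot (here, WALKING at a labeled place in the afternoon). The abstract's claims of continuous, on-device, and power-efficient operation imply that a real system would additionally duty-cycle its sensors and run such classification in a background service rather than in a one-shot main method.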