Mobile Computing, Applications, and Services. 4th International Conference, MobiCASE 2012, Seattle, WA, USA, October 11-12, 2012. Revised Selected Papers

Research Article

Automatic Annotation of Daily Activity from Smartphone-Based Multisensory Streams

  • @INPROCEEDINGS{10.1007/978-3-642-36632-1_19,
        author={Jihun Hamm and Benjamin Stone and Mikhail Belkin and Simon Dennis},
        title={Automatic Annotation of Daily Activity from Smartphone-Based Multisensory Streams},
        proceedings={Mobile Computing, Applications, and Services. 4th International Conference, MobiCASE 2012, Seattle, WA, USA, October 11-12, 2012. Revised Selected Papers},
        proceedings_a={MOBICASE},
        publisher={Springer},
        year={2013},
        month={2},
        keywords={mobile computing, lifelogging, activity recognition, automatic annotation},
        doi={10.1007/978-3-642-36632-1_19}
    }
Jihun Hamm¹, Benjamin Stone¹, Mikhail Belkin¹, Simon Dennis¹
  • 1: The Ohio State University

Abstract

We present a system for automatic annotation of daily experience from multisensory streams on smartphones. Using smartphones as a platform facilitates the collection of naturalistic daily activity data, which is difficult to obtain with multiple on-body sensors or with arrays of sensors affixed to indoor locations. However, recognizing daily activities in unconstrained settings is more challenging than in controlled environments: 1) the multiple heterogeneous sensors in smartphones are noisy, asynchronous, vary in sampling rate, and can have missing data; 2) unconstrained daily activities are continuous, can occur concurrently, and have fuzzy onset and offset boundaries; 3) ground-truth labels obtained from the user's self-report can be erroneous and are accurate only at a coarse time scale. To handle these problems, we present a flexible framework for incorporating heterogeneous sensory modalities, combined with state-of-the-art classifiers for sequence labeling. We evaluate the system on real-life data containing 11,721 minutes of multisensory recordings, and demonstrate the accuracy and efficiency of the proposed system for practical lifelogging applications.
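The first challenge the abstract names — sensors that are asynchronous, sampled at different rates, and prone to missing data — is typically handled by aligning all streams onto fixed time windows before classification. The sketch below illustrates that idea only; the function, window length, and forward-fill policy are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def window_features(streams, window_sec=60, n_windows=None):
    """Align asynchronous sensor streams onto fixed-length windows.

    streams: {sensor_name: [(timestamp_sec, value), ...]} with irregular
    sampling per sensor. Each window gets the mean of the samples that
    fall inside it; windows with no samples for a sensor are forward-
    filled from the previous window (a simple missing-data policy).
    Returns one feature vector per window, sensors in sorted name order.
    """
    names = sorted(streams)
    if n_windows is None:
        last = max(t for s in streams.values() for t, _ in s)
        n_windows = int(last // window_sec) + 1
    # accumulate per-sensor sums and counts for each window
    sums = {n: defaultdict(float) for n in names}
    counts = {n: defaultdict(int) for n in names}
    for n in names:
        for t, v in streams[n]:
            w = int(t // window_sec)
            sums[n][w] += v
            counts[n][w] += 1
    feats = []
    prev = {n: 0.0 for n in names}  # fallback before first observation
    for w in range(n_windows):
        vec = []
        for n in names:
            if counts[n][w]:
                prev[n] = sums[n][w] / counts[n][w]
            vec.append(prev[n])
        feats.append(vec)
    return feats

# A fast accelerometer stream and a sparse GPS stream end up on a
# common one-minute grid, ready for a sequence labeler.
feats = window_features({
    "accel": [(0, 1.0), (30, 3.0), (70, 5.0)],
    "gps": [(10, 2.0)],
})
```

The resulting per-window vectors are what a sequence-labeling classifier would consume, one label per window.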