Context-Aware Systems and Applications. Second International Conference, ICCASA 2013, Phu Quoc Island, Vietnam, November 25-26, 2013, Revised Selected Papers

Research Article

Towards Classification Based Human Activity Recognition in Video Sequences

@INPROCEEDINGS{10.1007/978-3-319-05939-6_21,
    author={Nguyen Binh and Swati Nigam and Ashish Khare},
    title={Towards Classification Based Human Activity Recognition in Video Sequences},
    proceedings={Context-Aware Systems and Applications. Second International Conference, ICCASA 2013, Phu Quoc Island, Vietnam, November 25-26, 2013, Revised Selected Papers},
    proceedings_a={ICCASA},
    year={2014},
    month={6},
    keywords={Human activity recognition, Classification, Feature descriptors},
    doi={10.1007/978-3-319-05939-6_21}
}

Nguyen Binh 1,*, Swati Nigam 2,*, Ashish Khare 2,*
  • 1: Ho Chi Minh City University of Technology
  • 2: University of Allahabad
*Contact email: ntbinh@cse.hcmut.edu.vn, swatinigam.au@gmail.com, ashishkhare@hotmail.com

Abstract

Recognizing human activities is an important component of a context-aware system. In this paper, we propose a classification-based human activity recognition approach. The approach recognizes different human activities using a local shape feature descriptor and a pattern classifier. We use a novel local shape feature descriptor that integrates central moments with local binary patterns. The classifier used is a flexible binary support vector machine. Experimental evaluations have been performed on the standard Weizmann activity video dataset. Six different activities have been considered for evaluation of the proposed method, with two activities selected at a time for the binary classifier: the walk-run, bend-jump, and jack-skip pairs. Experimental results and comparisons with other methods demonstrate that the proposed method performs well and is capable of recognizing six different human activities in videos.
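
To make the pipeline outlined in the abstract concrete, the following is a minimal sketch of a descriptor-plus-binary-SVM setup of the kind described, assuming grayscale frames from Weizmann clips are available. The library choices (OpenCV, scikit-image, scikit-learn), the LBP parameters, the choice of central moments, the descriptor pooling, and the SVM settings are illustrative assumptions, not the authors' exact configuration.

    # Illustrative sketch of a classification-based activity recognition pipeline
    # combining local binary patterns with central moments, as the abstract outlines.
    # All library choices and parameter values are assumptions for illustration.
    import numpy as np
    import cv2
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def frame_descriptor(gray_frame, lbp_points=8, lbp_radius=1):
        """Concatenate an LBP histogram with central moments of one grayscale (uint8) frame."""
        # Local binary pattern histogram (local shape/texture information).
        lbp = local_binary_pattern(gray_frame, lbp_points, lbp_radius, method="uniform")
        n_bins = lbp_points + 2  # number of distinct "uniform" pattern labels
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        # Central moments (translation-invariant global shape statistics).
        m = cv2.moments(gray_frame)
        central = np.array([m[k] for k in ("mu20", "mu11", "mu02", "mu30", "mu21", "mu12", "mu03")])
        # Compress the moments' dynamic range so neither descriptor part dominates.
        central = np.sign(central) * np.log1p(np.abs(central))
        return np.concatenate([hist, central])

    def video_descriptor(frames):
        """Average per-frame descriptors over a video clip."""
        return np.mean([frame_descriptor(f) for f in frames], axis=0)

    def train_pair_classifier(clips_a, clips_b):
        """Train a binary SVM for one activity pair (e.g. walk vs. run clips)."""
        X = np.array([video_descriptor(c) for c in clips_a + clips_b])
        y = np.array([0] * len(clips_a) + [1] * len(clips_b))
        clf = SVC(kernel="rbf", C=1.0)  # binary SVM, one activity pair at a time
        clf.fit(X, y)
        return clf

A classifier trained this way handles a single activity pair; under this reading of the abstract, each of the three reported pairs (walk-run, bend-jump, jack-skip) would get its own binary SVM.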