Mobile Computing, Applications, and Services. 4th International Conference, MobiCASE 2012, Seattle, WA, USA, October 11-12, 2012. Revised Selected Papers

Research Article

Visage: A Face Interpretation Engine for Smartphone Applications

  • @INPROCEEDINGS{10.1007/978-3-642-36632-1_9,
        author={Xiaochao Yang and Chuang-Wen You and Hong Lu and Mu Lin and Nicholas Lane and Andrew Campbell},
        title={Visage: A Face Interpretation Engine for Smartphone Applications},
        booktitle={Mobile Computing, Applications, and Services. 4th International Conference, MobiCASE 2012, Seattle, WA, USA, October 11-12, 2012. Revised Selected Papers},
        publisher={Springer},
        year={2013},
        month={2},
        keywords={face-aware mobile application, face interpretation engine},
        doi={10.1007/978-3-642-36632-1_9}
    }
    
Xiaochao Yang1,*, Chuang-Wen You1,*, Hong Lu2,*, Mu Lin1,*, Nicholas Lane3,*, Andrew Campbell1,*
  • 1: Dartmouth College
  • 2: Intel Lab
  • 3: Microsoft Research Asia
*Contact email: Xiaochao.Yang@dartmouth.edu, chuang-wen.you@dartmouth.edu, hong.lu@intel.com, mu.lin@dartmouth.edu, niclane@microsoft.com, campbell@cs.dartmouth.edu

Abstract

Smartphones are powerful mobile computing devices that enable a wide variety of new applications and opportunities for human interaction, sensing, and communications. Because smartphones come with front-facing cameras, users can now interact with and drive applications through their facial responses, enabling participatory and opportunistic face-aware applications. This paper presents the design, implementation, and evaluation of a robust, real-time face interpretation engine for smartphones, called Visage, that enables a new class of face-aware applications. Visage fuses data streams from the phone's front-facing camera and built-in motion sensors to infer, in an energy-efficient manner, the user's 3D head pose (i.e., the pitch, roll, and yaw of the user's head with respect to the phone) and facial expressions (e.g., happy, sad, angry). Visage supports a set of novel sensing, tracking, and machine learning algorithms on the phone, specifically designed to deal with the challenges presented by user mobility, varying phone contexts, and resource limitations. Results demonstrate that Visage is effective in different real-world scenarios. Furthermore, we developed two distinct proof-of-concept applications, Streetview+ and Mood Profiler, driven by Visage.
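The camera/motion-sensor fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the simple additive combination are hypothetical, and it assumes a roughly static phone so that accelerometer readings approximate gravity:

```python
import math

def device_pitch_roll(ax, ay, az):
    """Estimate the phone's pitch and roll (radians) from one
    accelerometer sample, assuming the reading is dominated by gravity."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def head_pose_relative_to_gravity(camera_pose, accel):
    """Combine a camera-derived head pose (pitch, roll, yaw, all
    relative to the phone) with the phone's own tilt, so the result
    is expressed relative to gravity rather than the moving phone."""
    cam_pitch, cam_roll, cam_yaw = camera_pose
    dev_pitch, dev_roll = device_pitch_roll(*accel)
    # Yaw is left unchanged: gravity gives no heading information.
    return (cam_pitch + dev_pitch, cam_roll + dev_roll, cam_yaw)
```

For example, with the phone lying flat (acceleration only along z), the device tilt is zero and the camera-derived pose passes through unchanged. A real system would also have to handle user motion, lighting changes, and the energy cost of the camera pipeline, which is where the paper's contribution lies.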