
Research Article
Reconstructing Facial Expressions of HMD Users for Avatars in VR
@INPROCEEDINGS{10.1007/978-3-030-95531-1_5,
  author={Christian Felix Purps and Simon Janzer and Matthias W\"{o}lfel},
  title={Reconstructing Facial Expressions of HMD Users for Avatars in VR},
  booktitle={ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings},
  publisher={Springer},
  year={2022},
  month={2},
  keywords={Facial expressions, Avatars, HMD, Virtual reality},
  doi={10.1007/978-3-030-95531-1_5}
}
- Christian Felix Purps
- Simon Janzer
- Matthias Wölfel
Year: 2022
Reconstructing Facial Expressions of HMD Users for Avatars in VR
ARTSIT
Springer
DOI: 10.1007/978-3-030-95531-1_5
Abstract
Real-time recognition of human facial expressions and their transfer into software is now well established and can be found in a variety of computer applications. Most solutions, however, are not designed for users wearing a head-mounted display. In that case the face is partially occluded, and approaches that assume a fully visible face are not applicable. To overcome this limitation, we present a systematic approach that covers the entire pipeline from facial expression recognition on RGB images to real-time, blendshape-based facial animation of avatars in virtual reality applications. To achieve this, we (a) developed a three-stage machine learning pipeline that detects the mouth region, extracts anthropological landmarks, and infers facial muscle activations, and (b) created a realistic avatar using photogrammetry and 3D modeling, with blendshapes that closely follow the facial action coding system (FACS). This provides an interface to our facial expression recognition system while also allowing other blendshape-oriented approaches to drive our avatar. Our facial expression recognition system performed well on common metrics and in real-time testing. Jitter and the detection or approximation of upper-face features, however, remain open issues.
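The abstract describes driving the avatar through FACS-aligned blendshapes: each recognized muscle activation maps to a weight on a per-expression set of vertex offsets added to a neutral mesh. The following is a minimal, hypothetical sketch of that standard blendshape blending step (the shape name and array layout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Blend a neutral face mesh with weighted per-expression offsets.

    neutral: (V, 3) array of vertex positions
    deltas:  dict mapping shape name -> (V, 3) vertex offsets
    weights: dict mapping shape name -> activation in [0, 1],
             e.g. produced by a FACS-style activation detector
    """
    posed = neutral.copy()
    for name, w in weights.items():
        posed += w * deltas[name]  # linear combination of offsets
    return posed

# Toy two-vertex "mesh" with one hypothetical shape, "jaw_open".
neutral = np.zeros((2, 3))
deltas = {"jaw_open": np.array([[0.0, -1.0, 0.0],
                                [0.0,  0.0, 0.0]])}

# A half-activated jaw moves the first vertex halfway along its offset.
posed = apply_blendshapes(neutral, deltas, {"jaw_open": 0.5})
```

Because the pose is a linear function of the weights, any recognition system that outputs per-shape activations can drive the same rig, which is the interface property the abstract highlights.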