
Research Article
Animated Pedagogical Agents Performing Affective Gestures Extracted from the GEMEP Dataset: Can People Recognize Their Emotions?
@INPROCEEDINGS{10.1007/978-3-031-55312-7_20,
  author = {Magzhan Mukanova and Nicoletta Adamo and Christos Mousas and Minsoo Choi and Klay Hauser and Richard Mayer and Fangzheng Zhao},
  title = {Animated Pedagogical Agents Performing Affective Gestures Extracted from the GEMEP Dataset: Can People Recognize Their Emotions?},
  proceedings = {ArtsIT, Interactivity and Game Creation. 12th EAI International Conference, ArtsIT 2023, S\~{a}o Paulo, Brazil, November 27-29, 2023, Proceedings, Part II},
  proceedings_a = {ARTSIT PART 2},
  year = {2024},
  month = {3},
  keywords = {Affective Body Gestures; Animated Pedagogical Agents; GEMEP database; Emotion Recognition},
  doi = {10.1007/978-3-031-55312-7_20}
}
Magzhan Mukanova
Nicoletta Adamo
Christos Mousas
Minsoo Choi
Klay Hauser
Richard Mayer
Fangzheng Zhao
Year: 2024
Animated Pedagogical Agents Performing Affective Gestures Extracted from the GEMEP Dataset: Can People Recognize Their Emotions?
ARTSIT PART 2
Springer
DOI: 10.1007/978-3-031-55312-7_20
Abstract
The study reported in this paper focused on applying a set of affective body gestures extracted from the Geneva Multimodal Emotion Portrayals (GEMEP) dataset to two animated pedagogical agents in an online lecture context and studying the effects of those gestures on participants’ perception of the agents’ emotions. A total of 131 participants completed an online survey in which they watched animations featuring a female and a male animated agent expressing six emotions (anger, joy, sadness, disgust, fear, and surprise) while delivering a lecture segment. After watching the animations, participants were asked to identify the agents’ emotions. Findings showed that only one expression of anger, performed by the female agent, was recognized with an acceptable level of accuracy (recognition rate >75%), while the remaining emotions showed low recognition rates ranging from 1.5% to 64%. Mapping the results onto Russell’s circumplex model of emotion showed that participants’ identification of the levels of emotion arousal and valence was slightly more accurate than recognition of emotion quality, but still low (recognition rates <75% for 5 out of 6 emotions). Results suggest that hand and arm gestures alone are not sufficient for conveying either the agent’s emotion type or the levels of emotion valence and arousal.
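The coarser valence/arousal comparison described above can be illustrated with a minimal sketch. This is not the authors’ analysis code: the quadrant placements below are rough textbook positions on Russell’s circumplex model, assumed here only to show how a perceived emotion can be scored as "correct" at the valence/arousal level even when the exact emotion label is missed.

```python
# Illustrative sketch (assumed placements, not the study's scoring code):
# map each of the six emotions to approximate valence/arousal signs on
# Russell's circumplex model, then compare at the quadrant level.

CIRCUMPLEX = {
    # emotion: (valence, arousal), where +1 = positive/high and -1 = negative/low
    "anger":    (-1, +1),
    "joy":      (+1, +1),
    "sadness":  (-1, -1),
    "disgust":  (-1, +1),   # placement varies across sources; high arousal assumed here
    "fear":     (-1, +1),
    "surprise": (+1, +1),   # often valence-ambiguous; positive valence assumed here
}

def same_quadrant(intended: str, perceived: str) -> bool:
    """True if the perceived emotion shares the intended emotion's valence and
    arousal signs (a coarser match than exact emotion recognition)."""
    return CIRCUMPLEX[intended] == CIRCUMPLEX[perceived]

if __name__ == "__main__":
    # Mistaking anger for fear keeps valence/arousal correct;
    # mistaking anger for sadness does not.
    print(same_quadrant("anger", "fear"))     # True
    print(same_quadrant("anger", "sadness"))  # False
```

Under such a scheme, confusions within a quadrant (e.g., anger perceived as fear) still count as correct identification of valence and arousal, which is why quadrant-level accuracy can exceed exact emotion recognition, as the abstract reports.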