Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI, CAIP 2021, 20-24 November 2021, Bologna, Italy

Research Article

Representational bias in expression and annotation of emotions in audiovisual databases

@INPROCEEDINGS{10.4108/eai.20-11-2021.2314203,
    author={William Saakyan and Olya Hakobyan and Hanna Drimalla},
    title={Representational bias in expression and annotation of emotions in audiovisual databases},
    proceedings={Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI, CAIP 2021, 20-24 November 2021, Bologna, Italy},
    publisher={EAI},
    proceedings_a={CAIP},
    year={2021},
    month={12},
    keywords={datasets, emotion recognition, machine learning, bias},
    doi={10.4108/eai.20-11-2021.2314203}
}
    
William Saakyan1,*, Olya Hakobyan1, Hanna Drimalla1
  • 1: Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Germany
*Contact email: wsaakyan@techfak.uni-bielefeld.de

Abstract

Emotion recognition models can be confounded by representational bias, where populations with certain gender, age, or ethnoracial characteristics are insufficiently represented in the training data. This may lead to erroneous predictions with personally relevant consequences in sensitive contexts. We systematically examined 130 emotion datasets (audio, visual, and audio-visual) and found that age and ethnoracial background are the most affected dimensions, while gender is largely balanced in emotion datasets. The observed disparities between age and ethnoracial groups are compounded by scarce and inconsistent reporting of demographic information. Finally, we observed a lack of information about the annotators of emotion datasets, another potential source of bias.