
Research Article
A Quantitative Comparison of Manual vs. Automated Facial Coding Using Real Life Observations of Fathers
@INPROCEEDINGS{10.1007/978-3-031-34586-9_25,
  author    = {Romana Burgess and Iryna Culpin and Helen Bould and Rebecca Pearson and Ian Nabney},
  title     = {A Quantitative Comparison of Manual vs. Automated Facial Coding Using Real Life Observations of Fathers},
  booktitle = {Pervasive Computing Technologies for Healthcare. 16th EAI International Conference, PervasiveHealth 2022, Thessaloniki, Greece, December 12-14, 2022, Proceedings},
  series    = {PERVASIVEHEALTH},
  publisher = {Springer},
  year      = {2023},
  month     = {6},
  keywords  = {Automated facial coding, FaceReader, ALSPAC},
  doi       = {10.1007/978-3-031-34586-9_25}
}
Romana Burgess
Iryna Culpin
Helen Bould
Rebecca Pearson
Ian Nabney
Year: 2023
PERVASIVEHEALTH
Springer
DOI: 10.1007/978-3-031-34586-9_25
Abstract
This work explores the application of an automated facial recognition software, “FaceReader” [1], to videos of fathers (n = 36), recorded using headcams worn by their infants during interactions in the home. We evaluate FaceReader as an alternative to manual coding, which is both time and labour intensive, and advance understanding of the usability of this software in naturalistic interactions. Using video data taken from the Avon Longitudinal Study of Parents and Children (ALSPAC), we first manually coded fathers’ facial expressions according to an existing coding scheme, and then processed the videos using FaceReader. We used contingency tables and multivariate logistic regression models to compare the manual and automated outputs. Our results indicated low levels of facial recognition by FaceReader in naturalistic interactions (approximately 25.17% compared to manual coding), and we discuss potential causes for this (e.g., problems with lighting, the headcams themselves, and the speed of infant movement). However, our logistic regression models showed that when the face was found, FaceReader predicted manually coded expressions with a mean accuracy of M = 0.84 (range = 0.67–0.94), sensitivity of M = 0.64 (range = 0.27–0.97), and specificity of M = 0.81 (range = 0.51–0.97).
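The accuracy, sensitivity, and specificity figures reported above can be derived from a 2×2 contingency table that cross-tabulates the manual codes (taken as the reference) against FaceReader's output for a given expression. A minimal sketch of that computation is below; the counts used are hypothetical, chosen only to illustrate the arithmetic, and are not the study's data.

```python
# Illustrative only: agreement metrics for one expression category,
# given a 2x2 contingency table of manual coding (reference) vs.
# automated (FaceReader) output. Counts here are hypothetical.
def agreement_metrics(tp, fn, fp, tn):
    """Return (accuracy, sensitivity, specificity).

    tp: frames coded as the expression by both methods
    fn: manually coded frames FaceReader missed
    fp: frames FaceReader coded that manual coding did not
    tn: frames neither method coded as the expression
    """
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total        # overall agreement
    sensitivity = tp / (tp + fn)        # manual positives FaceReader recovered
    specificity = tn / (tn + fp)        # manual negatives FaceReader confirmed
    return accuracy, sensitivity, specificity

acc, sens, spec = agreement_metrics(tp=27, fn=15, fp=8, tn=50)
print(f"accuracy={acc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
# → accuracy=0.77, sensitivity=0.64, specificity=0.86
```

In the paper, sensitivity and specificity are reported per expression and then summarised as means and ranges across expressions, which is why each metric carries both an M value and a range.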