Wireless Mobile Communication and Healthcare. 12th EAI International Conference, MobiHealth 2023, Vila Real, Portugal, November 29-30, 2023 Proceedings

Research Article

Develop Method to Efficiently Apply Image-Based Facial Emotion Classification Models to Video Data

@INPROCEEDINGS{10.1007/978-3-031-60665-6_26,
    author={Hee Min Yang and Joo Hyun Lee and Yu Rang Park},
    title={Develop Method to Efficiently Apply Image-Based Facial Emotion Classification Models to Video Data},
    proceedings={Wireless Mobile Communication and Healthcare. 12th EAI International Conference, MobiHealth 2023, Vila Real, Portugal, November 29-30, 2023 Proceedings},
    proceedings_a={MOBIHEALTH},
    year={2024},
    month={6},
    keywords={Facial Emotion Recognition; Deep Learning; Computer Vision; Child},
    doi={10.1007/978-3-031-60665-6_26}
}
Hee Min Yang¹, Joo Hyun Lee¹, Yu Rang Park¹·*
1: Department of Biomedical Systems Informatics
*Contact email: yurangpark@yuhs.ac

Abstract

The ability to recognize emotions from facial cues in childhood is helpful for social interactions. Image-based facial emotion recognition models require little computing power but cannot exploit the sequential information in video data. Conversely, video-based facial emotion recognition models demand high computational power, so they cannot easily be applied in low-compute environments. In this paper, we propose a method that classifies the emotion in facial expression video data by applying a threshold to the outputs of an image-based model. The proposed method improves accuracy by 3.67%, 24.74%, and 15.13% on the respective video datasets by reducing non-emotion frames in the video and responding more sensitively to the expressed emotion, compared with methods that simply select the most frequent emotion in the video. The results show that the threshold method can improve emotion classification performance without modifying the underlying facial emotion classification model.
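The aggregation strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an image-based classifier that returns per-frame emotion probabilities (the `frame_probs` input and the `classify_video_emotion` helper are hypothetical names), discards frames whose top score falls below a confidence threshold as "non-emotion", and takes a majority vote over the remaining frames.

```python
from collections import Counter

def classify_video_emotion(frame_probs, threshold=0.5):
    """Aggregate per-frame emotion probabilities into one video-level label.

    frame_probs: list of dicts mapping emotion label -> probability,
    one dict per frame (the output of any image-based classifier).
    Frames whose top probability is below `threshold` are treated as
    non-emotion and excluded from the vote.
    """
    votes = []
    for probs in frame_probs:
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            votes.append(label)
    if not votes:
        return "neutral"  # fallback when no frame is confident enough
    # Majority vote over the confident frames only
    return Counter(votes).most_common(1)[0][0]

# Example: three frames, the middle one below threshold is discarded
frames = [
    {"happy": 0.9, "sad": 0.1},
    {"happy": 0.4, "sad": 0.35, "neutral": 0.25},
    {"happy": 0.8, "sad": 0.2},
]
print(classify_video_emotion(frames, threshold=0.5))  # -> happy
```

In contrast, the baseline the abstract compares against would simply take the most frequent label across all frames, with no threshold step; the threshold is what keeps low-confidence (non-emotion) frames from diluting the vote.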

Keywords
Facial Emotion Recognition; Deep Learning; Computer Vision; Child
Published
2024-06-28
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-60665-6_26
Copyright © 2023–2025 ICST
