
Research Article
Facial Action Unit Detection by Exploring the Weak Relationships Between AU Labels
@INPROCEEDINGS{10.1007/978-3-031-24386-8_26,
  author={Mengke Tian and Hengliang Zhu and Yong Wang and Yimao Cai and Feng Liu and Pengrong Lin and Yingzhuo Huang and Xiaochen Xie},
  title={Facial Action Unit Detection by Exploring the Weak Relationships Between AU Labels},
  proceedings={Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II},
  proceedings_a={COLLABORATECOM PART 2},
  year={2023},
  month={1},
  keywords={Action Unit (AU) detection, Emotion, Semantic relation},
  doi={10.1007/978-3-031-24386-8_26}
}
- Mengke Tian
- Hengliang Zhu
- Yong Wang
- Yimao Cai
- Feng Liu
- Pengrong Lin
- Yingzhuo Huang
- Xiaochen Xie
Year: 2023
Facial Action Unit Detection by Exploring the Weak Relationships Between AU Labels
COLLABORATECOM PART 2
Springer
DOI: 10.1007/978-3-031-24386-8_26
Abstract
In recent years, facial action unit (AU) detection has attracted increasing attention and great progress has been made. However, few approaches exploit emotion information to solve the AU detection problem, and the specific influence of emotion categories on AU detection has not been investigated. In this paper, we first explore the relationship between emotion categories and AU labels, and study the influence of emotion on AU detection. Using emotions as weak labels, we propose a simple yet efficient deep network that uses a limited number of emotion labels to constrain AU detection. The proposed network consists of two sub-networks: a main net and an assistant net. The main net learns the semantic relations between AUs, especially the AUs related to emotions. Moreover, we design a dual pooling module embedded into the main net to further improve the results. Extensive experiments on two datasets show that AU detection benefits from the weak emotion labels. The proposed method improves significantly on the baseline and achieves state-of-the-art performance compared with other methods. Furthermore, because only the main net is used at test time, our model is very fast, running at over 278 fps.
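To make the two-branch training scheme described in the abstract concrete, here is a minimal sketch in PyTorch. The backbone, the realization of dual pooling as concatenated global average and max pooling, the loss weighting, and all layer sizes are assumptions for illustration; the paper's actual architecture is not published here.

```python
# Illustrative sketch only: the backbone, the dual pooling design, and the
# emotion loss weight are hypothetical stand-ins, not the authors' code.
import torch
import torch.nn as nn


class DualPooling(nn.Module):
    """Pools feature maps two ways and fuses the results (assumed design)."""

    def forward(self, x):                      # x: (B, C, H, W)
        avg = torch.mean(x, dim=(2, 3))        # global average pooling
        mx = torch.amax(x, dim=(2, 3))         # global max pooling
        return torch.cat([avg, mx], dim=1)     # (B, 2C)


class MainNet(nn.Module):
    """Predicts per-AU probabilities; the only branch used at test time."""

    def __init__(self, num_aus=12, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(         # stand-in feature extractor
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = DualPooling()
        self.au_head = nn.Linear(2 * channels, num_aus)

    def forward(self, x):
        feats = self.pool(self.backbone(x))
        return self.au_head(feats), feats      # AU logits + shared features


class AssistantNet(nn.Module):
    """Maps shared features to emotion logits; used only during training."""

    def __init__(self, in_dim, num_emotions=7):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_emotions)

    def forward(self, feats):
        return self.fc(feats)


# One training step under the assumed formulation: binary cross-entropy for
# multi-label AU detection, plus a weighted cross-entropy term that lets the
# weak emotion labels constrain the shared features.
main, assistant = MainNet(), AssistantNet(in_dim=128)
images = torch.randn(4, 3, 64, 64)
au_targets = torch.randint(0, 2, (4, 12)).float()
emotion_targets = torch.randint(0, 7, (4,))

au_logits, feats = main(images)
emo_logits = assistant(feats)
loss = (nn.functional.binary_cross_entropy_with_logits(au_logits, au_targets)
        + 0.1 * nn.functional.cross_entropy(emo_logits, emotion_targets))
loss.backward()
```

Because the assistant branch only shapes the shared features through its loss term, it can be dropped entirely at inference, which is consistent with the reported test-time speed of the main net alone.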