RE-Net: A Relation Embedded Deep Model For AU Occurrence And Intensity Estimation
Abstract
Facial action unit (AU) recognition is a multi-label classification problem in which regular spatial and temporal patterns exist among AU labels due to facial anatomy and human behavioral habits. Exploiting AU correlations is beneficial for obtaining a robust AU detector and for reducing the dependency on large amounts of AU-labeled samples. Several related works apply AU correlations to the model's objective function or to the extracted features. However, this may not be optimal, as all the AUs still share the same backbone network, requiring the model to be updated as a whole. In this work, we present a novel AU Relation Embedded deep model (RE-Net) for AU detection that applies AU correlations to the model's parameter space. Specifically, we formulate the multi-label AU detection problem as a domain adaptation task and propose a model that contains both shared and AU-specific parameters, where the shared parameters are used by all AUs and the AU-specific parameters are owned by each individual AU. An AU-relationship-based regularization is applied to the AU-specific parameters. Extensive experiments on three public benchmarks demonstrate that our method outperforms previous work and achieves state-of-the-art performance on both the AU detection and AU intensity estimation tasks.
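The core mechanism described above, shared backbone parameters plus per-AU parameters with a relation-based penalty in parameter space, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the backbone layers, head shapes, the `relation` matrix, and the Laplacian-style penalty are all assumptions, since the paper's exact formulation is not reproduced on this page.

    # Hypothetical sketch of a shared/AU-specific parameter split with a
    # relation-based regularizer on the AU-specific weights. All shapes,
    # layers, and the penalty form are illustrative assumptions.
    import torch
    import torch.nn as nn

    class RelationEmbeddedModel(nn.Module):
        def __init__(self, num_aus, feat_dim=512):
            super().__init__()
            # Shared parameters: one backbone used by all AUs.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            # AU-specific parameters: one linear head per AU.
            self.au_heads = nn.ModuleList(
                [nn.Linear(feat_dim, 1) for _ in range(num_aus)]
            )

        def forward(self, x):
            feat = self.backbone(x)
            # One occurrence logit per AU, concatenated to (batch, num_aus).
            return torch.cat([head(feat) for head in self.au_heads], dim=1)

    def relation_regularizer(model, relation):
        """Pull the weight vectors of correlated AUs toward each other.

        `relation` is an assumed (num_aus x num_aus) tensor of AU
        correlation strengths, standing in for the paper's AU relationship.
        """
        weights = torch.stack([h.weight.squeeze(0) for h in model.au_heads])
        diff = weights.unsqueeze(0) - weights.unsqueeze(1)  # pairwise w_i - w_j
        return (relation * diff.pow(2).sum(-1)).sum()

A training loss would then combine per-AU binary cross-entropy with a weighted relation_regularizer term, so that updates to one AU's parameters are softly constrained by its correlated neighbors rather than by the shared backbone alone.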
Recommended Citation
H. Yang and L. Yin, "RE-Net: A Relation Embedded Deep Model For AU Occurrence And Intensity Estimation," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12626 LNCS, pp. 137-153, Springer, Jan. 2021.
The definitive version is available at https://doi.org/10.1007/978-3-030-69541-5_9
Department(s)
Computer Science
International Standard Book Number (ISBN)
978-3-030-69540-8
International Standard Serial Number (ISSN)
1611-3349; 0302-9743
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2023 Springer. All rights reserved.
Publication Date
01 Jan 2021
Comments
National Science Foundation, Grant CNS-1629898