Abstract
Visual attention has been extensively studied for learning fine-grained features in both facial expression recognition (FER) and Action Unit (AU) detection. A broad range of previous research has explored how attention modules can localize detailed facial parts (e.g., facial action units), learn discriminative features, and capture inter-class correlation. However, few related works pay attention to the robustness of the attention module itself. Through experiments, we found that neural attention maps initialized with different feature maps yield diverse representations even when learning to attend to the same Region of Interest (ROI). In other words, just as with general feature learning, the representational quality of attention maps greatly affects model performance, and unconstrained attention learning therefore involves considerable randomness. This uncertainty causes conventional attention learning to settle in sub-optimal solutions. In this paper, we propose a compact model that enhances the representational and focusing power of neural attention maps and learns the 'inter-attention' correlation to refine them, which we term the Self-Diversified Multi-Channel Attention Network (SMA-Net). The proposed method is evaluated on two benchmark databases (BP4D and DISFA) for AU detection and four databases (CK+, MMI, BU-3DFE, and BP4D+) for facial expression recognition. It achieves superior performance compared to state-of-the-art methods.
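To make the idea concrete, the sketch below illustrates one plausible reading of a multi-channel attention block with a diversity constraint: several attention maps are produced from the shared features, and a penalty discourages them from collapsing to the same representation. This is a minimal, hedged illustration only; the module and variable names (MultiChannelAttention, diversity_loss, num_attn) are hypothetical and do not reproduce the authors' actual SMA-Net architecture.

```python
# Minimal sketch (not the authors' SMA-Net): a multi-channel spatial attention
# block with a pairwise diversity penalty between attention channels, written
# in a PyTorch style. All names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiChannelAttention(nn.Module):
    def __init__(self, in_channels: int, num_attn: int = 4):
        super().__init__()
        # Each 1x1 conv produces one spatial attention map from the shared features,
        # so differently initialized branches can attend to the same ROI differently.
        self.attn_convs = nn.ModuleList(
            [nn.Conv2d(in_channels, 1, kernel_size=1) for _ in range(num_attn)]
        )

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) backbone feature map
        maps = [torch.sigmoid(conv(feats)) for conv in self.attn_convs]  # each (B, 1, H, W)
        attn = torch.cat(maps, dim=1)                                    # (B, K, H, W)
        # Simple mean fusion here; the paper instead learns 'inter-attention'
        # correlation to combine and refine the channels.
        fused = attn.mean(dim=1, keepdim=True)
        return feats * fused, attn


def diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    # Penalize pairwise cosine similarity between attention channels so that
    # the maps stay diverse rather than collapsing to one representation.
    b, k, _, _ = attn.shape
    flat = F.normalize(attn.view(b, k, -1), dim=-1)   # (B, K, HW)
    sim = torch.bmm(flat, flat.transpose(1, 2))        # (B, K, K)
    off_diag = sim - torch.eye(k, device=attn.device)  # zero out self-similarity
    return off_diag.pow(2).sum(dim=(1, 2)).mean() / (k * (k - 1))
```

In a training loop, the diversity term would typically be added to the task loss with a small weight, e.g. `loss = task_loss + 0.1 * diversity_loss(attn)`; the weight and fusion scheme here are assumptions, not values from the paper.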
Recommended Citation
X. Li et al., "Your 'Attention' Deserves Attention: A Self-Diversified Multi-Channel Attention For Facial Action Analysis," Proceedings - 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Institute of Electrical and Electronics Engineers, Jan 2021.
The definitive version is available at https://doi.org/10.1109/FG52635.2021.9666970
Department(s)
Computer Science
International Standard Book Number (ISBN)
978-1-6654-3176-7
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.
Publication Date
01 Jan 2021
Comments
National Science Foundation, Grant CNS-1629898