Abstract
Facial expression recognition holds great promise for applications such as content recommendation and mental healthcare by accurately detecting users’ emotional states. Traditional methods often rely on cameras or wearable sensors, which raise privacy concerns and add extra device burdens. In addition, existing acoustic-based methods struggle to maintain satisfactory performance when there is a distribution shift between the training dataset and the inference dataset. In this paper, we introduce FacER+, an active acoustic facial expression recognition system that eliminates the requirement for external microphone arrays. FacER+ extracts facial expression features by analyzing the echoes of near-ultrasound signals emitted from the smartphone's earpiece speaker and reflected off the 3D facial contour. This approach not only reduces background noise but also enables the identification of different expressions from various users with minimal training data. We develop a contrastive external attention-based model to consistently learn expression features across different users, reducing the distribution differences. Extensive experiments involving 20 volunteers, both with and without masks, demonstrate that FacER+ can accurately recognize six common facial expressions with over 90% accuracy in diverse, user-independent real-life scenarios, surpassing the performance of the leading acoustic sensing methods by 10%. FacER+ offers a robust and practical solution for facial expression recognition. The source code is available at https://github.com/MyRespect/FaceAcousticSensing.
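For readers who want a concrete picture of the two model ideas named above, the following is a minimal, illustrative sketch rather than the authors' released implementation (see the linked repository for that). It pairs an external-attention feature extractor with a supervised contrastive loss that pulls together echo features of the same expression regardless of which user produced them. The class and function names, layer sizes, temperature, and toy tensor shapes are all assumptions made for illustration.

# Minimal sketch (not the authors' code): external attention + supervised
# contrastive loss over acoustic echo features. All names/shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """External attention with two small learnable memory units (Guo et al., 2021)."""
    def __init__(self, dim: int, mem_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)   # key memory
        self.mv = nn.Linear(mem_size, dim, bias=False)   # value memory

    def forward(self, x):                                # x: (batch, frames, dim)
        attn = F.softmax(self.mk(x), dim=1)              # attend over time frames
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)                             # (batch, frames, dim)

def supervised_contrastive_loss(features, labels, temperature: float = 0.1):
    """Pull same-expression embeddings together across users, push others apart."""
    z = F.normalize(features, dim=1)                     # (batch, dim)
    sim = z @ z.t() / temperature
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                               # exclude self-pairs as positives
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp_logits = torch.exp(logits) * (1 - torch.eye(len(z)))     # drop self-similarity
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-9)
    pos_per_sample = mask.sum(dim=1).clamp(min=1)
    return -(mask * log_prob).sum(dim=1).div(pos_per_sample).mean()

# Toy usage: 8 echo-profile feature sequences, 6 expression classes.
x = torch.randn(8, 16, 128)                              # (batch, time frames, feature dim)
feats = ExternalAttention(128)(x).mean(dim=1)            # pool attended frames
loss = supervised_contrastive_loss(feats, torch.randint(0, 6, (8,)))
print(loss.item())

In this sketch, the contrastive objective is what encourages features of the same expression from different users to align, which is the mechanism the abstract credits for reducing distribution differences across users.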
Recommended Citation
G. Wang et al., "Decoding Emotions: Unveiling Facial Expressions through Acoustic Sensing with Contrastive Attention," IEEE Transactions on Mobile Computing, vol. 1, no. 1, Institute of Electrical and Electronics Engineers, Dec. 2024.
Department(s)
Computer Science
Keywords and Phrases
Acoustic sensing, expression recognition, contrastive learning, attention, domain adaptation, smartphone
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2025 Institute of Electrical and Electronics Engineers, all rights reserved
Publication Date
December 2024
