Semantic Relationships Guided Representation Learning for Facial Action Unit Recognition


  • Guanbin Li, Sun Yat-sen University
  • Xin Zhu, Sun Yat-sen University
  • Yirui Zeng, Sun Yat-sen University
  • Qing Wang, Sun Yat-sen University
  • Liang Lin, Sun Yat-sen University



Facial action unit (AU) recognition is a crucial task for facial expression analysis and has attracted extensive attention in the fields of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate semantic relationship propagation between AUs into a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of a structured knowledge graph and integrate a Gated Graph Neural Network (GGNN) into a multi-scale CNN framework to propagate node information through the graph and generate enhanced AU representations. As the learned features involve both appearance characteristics and AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on two public benchmarks demonstrate that our method outperforms previous work and achieves state-of-the-art performance.
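The core mechanism the abstract describes — propagating per-AU node features over a relationship graph with gated (GRU-style) updates — can be sketched roughly as below. This is a minimal illustration in the spirit of a generic GGNN step, not the authors' implementation: the number of AUs, feature dimension, weight shapes, and the row-normalized adjacency built from AU symbiosis/mutual-exclusion statistics are all assumptions for the sake of the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, params):
    """One gated graph propagation step (generic GGNN-style sketch).
    h: (N, D) per-AU node features; A: (N, N) adjacency encoding
    assumed symbiosis / mutual-exclusion relations between AUs.
    """
    W, Wz, Uz, Wr, Ur, Wh, Uh = params
    a = A @ (h @ W)                         # aggregate messages from related AUs
    z = sigmoid(a @ Wz + h @ Uz)            # update gate
    r = sigmoid(a @ Wr + h @ Ur)            # reset gate
    h_cand = np.tanh(a @ Wh + (r * h) @ Uh) # candidate node state
    return (1.0 - z) * h + z * h_cand       # gated blend of old and new state

rng = np.random.default_rng(0)
N, D = 12, 8                                # hypothetical: 12 AUs, 8-dim features
params = tuple(rng.standard_normal((D, D)) * 0.1 for _ in range(7))
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)           # row-normalize relation weights
h = rng.standard_normal((N, D))             # e.g. regional CNN features per AU
for _ in range(3):                          # a few propagation steps
    h = ggnn_step(h, A, params)
print(h.shape)
```

In the full framework, `h` would come from the multi-scale CNN's regional features and the propagated states would feed the per-AU classifiers; here random tensors stand in for both.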




How to Cite

Li, G., Zhu, X., Zeng, Y., Wang, Q., & Lin, L. (2019). Semantic Relationships Guided Representation Learning for Facial Action Unit Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8594-8601.



AAAI Technical Track: Vision