Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network

Authors

  • Tao Hu University of Chinese Academy of Sciences
  • Honggang Qi University of Chinese Academy of Sciences
  • Jizheng Xu Microsoft Research Asia, Beijing
  • Qingming Huang University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v32i1.12275

Keywords:

facial landmarks detection, non-linear least squares optimization

Abstract

Cascaded Regression (CR) based methods have been proposed to solve the facial landmarks detection problem, learning a series of descent directions through multiple cascaded regressors separately trained in coarse and fine stages. They outperform traditional gradient descent based methods in both accuracy and running speed. However, cascaded regression is not robust enough, because each regressor's training data comes from the output of the previous regressor. Moreover, training multiple regressors requires substantial computing resources, especially for deep learning based methods. In this paper, we develop a Self-Iterative Regression (SIR) framework to improve model efficiency. Only one self-iterative regressor is trained to learn the descent directions for samples from coarse to fine stages, and parameters are iteratively updated by the same regressor. Specifically, we propose the Landmarks-Attention Network (LAN) as our regressor, which concurrently learns features around each landmark and obtains the holistic location increment. By doing so, not only are the remaining regressors removed to simplify the training process, but the number of model parameters is also significantly decreased. The experiments demonstrate that with only 3.72M model parameters, our proposed method achieves state-of-the-art performance.
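The core idea of the abstract — reusing one regressor to iteratively predict location increments from coarse to fine stages — can be illustrated with a minimal sketch. The regressor below is a hypothetical stand-in (a simple contraction toward the ground-truth positions), not the paper's Landmarks-Attention Network, and names such as `regressor` and `num_iterations` are illustrative assumptions:

```python
def regressor(landmarks, targets, step=0.5):
    """Dummy single regressor: predicts a location increment for each landmark.
    In SIR the SAME regressor is reused at every iteration, covering both
    coarse and fine stages (here: a fixed fraction of the remaining offset)."""
    return [step * (t - l) for l, t in zip(landmarks, targets)]

def self_iterative_regression(initial, targets, num_iterations=10):
    """Iteratively refine landmark coordinates with one shared regressor."""
    landmarks = list(initial)
    for _ in range(num_iterations):
        # Predicted descent direction for the current estimate.
        increments = regressor(landmarks, targets)
        landmarks = [l + d for l, d in zip(landmarks, increments)]
    return landmarks

initial = [0.0, 10.0, -5.0]   # coarse initial landmark positions (1-D toy example)
targets = [3.0, 7.0, -1.0]    # ground-truth positions
refined = self_iterative_regression(initial, targets)
```

In the toy run above, each pass of the single regressor halves the remaining error, so the estimate converges toward the targets without ever training a second, stage-specific regressor.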

Published

2018-04-27

How to Cite

Hu, T., Qi, H., Xu, J., & Huang, Q. (2018). Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12275