Learning Pose Grammar to Encode Human Body Configuration for 3D Pose Estimation

Authors

  • Hao-Shu Fang, Shanghai Jiao Tong University
  • Yuanlu Xu, University of California, Los Angeles
  • Wenguan Wang, Beijing Institute of Technology
  • Xiaobai Liu, San Diego State University
  • Song-Chun Zhu, University of California, Los Angeles

Keywords

3D Pose Estimation, Deep Grammar Network, Pose Grammar, Deep Neural Network

Abstract

In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes a 2D pose as input and learns a generalized 2D-to-3D mapping function. It consists of a base network, which efficiently captures pose-aligned features, and a hierarchy of Bi-directional RNNs (BRNNs) on top that explicitly incorporates knowledge of human body configuration (i.e., kinematics, symmetry, and motor coordination). The model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator to augment training samples in virtual camera views, which further improves the generalizability of our model. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol in a cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter difficulty under this setting, whereas our method handles it well.
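To make the body-configuration priors in the abstract concrete, the sketch below groups a flat 2D pose into kinematic chains (the sequences a hierarchy of BRNNs would read) and left/right symmetry pairs. The joint names, indices, and chain definitions here are illustrative assumptions, not the paper's exact skeleton layout.

```python
# Illustrative grouping of 2D joints into kinematic chains and symmetry
# pairs. NOTE: joint layout and chain definitions are assumptions for the
# sketch, not the layout used in the paper.

JOINTS = [
    "pelvis", "r_hip", "r_knee", "r_ankle",
    "l_hip", "l_knee", "l_ankle",
    "spine", "neck", "head",
    "r_shoulder", "r_elbow", "r_wrist",
    "l_shoulder", "l_elbow", "l_wrist",
]
IDX = {name: i for i, name in enumerate(JOINTS)}

# Kinematic chains: joints ordered root-to-extremity, the order in which a
# bi-directional RNN would traverse each limb.
KINEMATIC_CHAINS = {
    "r_leg": ["pelvis", "r_hip", "r_knee", "r_ankle"],
    "l_leg": ["pelvis", "l_hip", "l_knee", "l_ankle"],
    "torso": ["pelvis", "spine", "neck", "head"],
    "r_arm": ["neck", "r_shoulder", "r_elbow", "r_wrist"],
    "l_arm": ["neck", "l_shoulder", "l_elbow", "l_wrist"],
}

# Symmetry pairs: left/right chains whose bone lengths should agree.
SYMMETRY_PAIRS = [("r_leg", "l_leg"), ("r_arm", "l_arm")]

def chain_features(pose_2d):
    """Slice a flat list of (x, y) joint coordinates into per-chain
    coordinate sequences, one sequence per kinematic chain."""
    return {
        name: [pose_2d[IDX[j]] for j in joints]
        for name, joints in KINEMATIC_CHAINS.items()
    }

if __name__ == "__main__":
    # Dummy 2D pose: one (x, y) pair per joint.
    pose = [(float(i), float(i)) for i in range(len(JOINTS))]
    feats = chain_features(pose)
    print(sorted(feats))        # the five chain names
    print(len(feats["r_leg"]))  # prints 4: joints in the right-leg chain
```

In the full model, each chain's sequence would feed one BRNN, and the symmetry pairs motivate constraints tying the left and right limbs together; this sketch only shows the data grouping step.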

Published

2018-04-27

How to Cite

Fang, H.-S., Xu, Y., Wang, W., Liu, X., & Zhu, S.-C. (2018). Learning Pose Grammar to Encode Human Body Configuration for 3D Pose Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/12270