3D Human Pose Estimation Using Spatio-Temporal Networks with Explicit Occlusion Training

Authors

  • Yu Cheng, National University of Singapore
  • Bo Yang, Tencent Game AI Research Center
  • Bo Wang, Tencent Game AI Research Center
  • Robby T. Tan, National University of Singapore, Yale-NUS College

DOI:

https://doi.org/10.1609/aaai.v34i07.6689

Abstract

Estimating 3D poses from a monocular video is still a challenging task, despite the significant progress made in recent years. Generally, the performance of existing methods drops when the target person is too small/large, or the motion is too fast/slow relative to the scale and speed of the training data. Moreover, to our knowledge, many of these methods are not explicitly designed or trained to handle severe occlusion, which compromises their performance under occlusion. To address these problems, we introduce a spatio-temporal network for robust 3D human pose estimation. As humans in videos may appear at different scales and move at various speeds, we apply multi-scale spatial features to predict 2D joints (keypoints) in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate 3D joints. Furthermore, we design a spatio-temporal discriminator based on body structures as well as limb motions to assess whether the predicted poses form a valid pose and a valid movement. During training, we explicitly mask out some keypoints to simulate various occlusion cases, from minor to severe, so that our network learns better and becomes robust to various degrees of occlusion. As 3D ground-truth data are limited, we further utilize 2D video data to inject a semi-supervised learning capability into our network. Experiments on public datasets validate the effectiveness of our method, and our ablation studies show the strengths of our network's individual submodules.
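The explicit occlusion training described above amounts to a keypoint-masking augmentation applied to the 2D inputs of the temporal network. The sketch below is not the authors' released code; it is a minimal illustration, assuming 2D keypoints shaped (T, J, 2) for T frames and J joints, of how a few joints per frame could be randomly hidden and flagged so the model sees occlusion-like inputs during training.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of
# occlusion-style keypoint masking used as a training-time augmentation.
import numpy as np

def mask_keypoints(keypoints, min_masked=1, max_masked=5, rng=None):
    """Randomly zero out a few joints per frame to simulate occlusion.

    Returns the masked keypoints and a (T, J) visibility mask, where
    0 marks a joint treated as occluded for this training sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    kps = keypoints.copy()
    T, J, _ = kps.shape
    visibility = np.ones((T, J), dtype=np.float32)
    for t in range(T):
        n_masked = rng.integers(min_masked, max_masked + 1)
        joints = rng.choice(J, size=n_masked, replace=False)
        kps[t, joints] = 0.0          # hide the joint coordinates
        visibility[t, joints] = 0.0   # record which joints were hidden
    return kps, visibility

# Example: a 9-frame clip with 17 joints (joint count is an assumption).
clip = np.random.rand(9, 17, 2).astype(np.float32)
masked_clip, vis = mask_keypoints(clip)
```

In practice the visibility mask can also be fed to the network or the loss so that masked joints do not contribute spurious supervision; the exact interface is a design choice not specified in the abstract.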

Published

2020-04-03

How to Cite

Cheng, Y., Yang, B., Wang, B., & Tan, R. T. (2020). 3D Human Pose Estimation Using Spatio-Temporal Networks with Explicit Occlusion Training. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10631-10638. https://doi.org/10.1609/aaai.v34i07.6689

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision