Causal-Inspired Multitask Learning for Video-Based Human Pose Estimation

Authors

  • Haipeng Chen, College of Computer Science and Technology, Jilin University, Changchun, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
  • Sifan Wu, College of Computer Science and Technology, Jilin University, Changchun, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
  • Zhigang Wang, College of Computer Science and Technology, Zhejiang Gongshang University, Hangzhou, China
  • Yifang Yin, Institute for Infocomm Research (I2R), A*STAR, Singapore
  • Yingying Jiao, College of Computer Science and Technology, Jilin University, Changchun, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
  • Yingda Lyu, College of Computer Science and Technology, Jilin University, Changchun, China; Public Computer Education and Research Center, Jilin University, Changchun, China
  • Zhenguang Liu, The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China

DOI:

https://doi.org/10.1609/aaai.v39i2.32202

Abstract

Video-based human pose estimation has long been a fundamental yet challenging problem in computer vision. Previous studies focus on spatio-temporal modeling through enhanced architecture designs and optimization strategies. However, they overlook the causal relationships among the joints, leading to models that may be overly tailored and thus generalize poorly to challenging scenes. Therefore, adequate causal reasoning capability, coupled with good model interpretability, is both indispensable and a prerequisite for achieving reliable results. In this paper, we pioneer a causal perspective on pose estimation and introduce a causal-inspired multitask learning framework consisting of two stages. In the first stage, we endow the model with causal spatio-temporal modeling ability by introducing two self-supervised auxiliary tasks. Specifically, these auxiliary tasks enable the network to infer challenging keypoints from observed keypoint information, thereby imbuing the model with causal reasoning capabilities and making it robust to challenging scenes. In the second stage, we argue that not all feature tokens contribute equally to pose estimation. Prioritizing causal (keypoint-relevant) tokens is crucial for achieving reliable results and improves the interpretability of the model. To this end, we propose a Token Causal Importance Selection module to distinguish causal tokens from non-causal tokens (e.g., background and objects). Additionally, non-causal tokens can provide potentially beneficial cues but may be redundant; we therefore introduce a non-causal token clustering module to merge similar non-causal tokens. Extensive experiments show that our method outperforms state-of-the-art methods on three large-scale benchmark datasets.
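The second-stage pipeline described in the abstract (score tokens by keypoint relevance, keep the top-scoring causal tokens, and merge the redundant non-causal remainder) can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the function name, the use of a precomputed per-token score, and the simple k-means merging step are all our assumptions.

```python
import numpy as np

def select_and_merge_tokens(tokens, scores, keep_ratio=0.5, n_clusters=2, seed=0):
    """Illustrative sketch: split tokens into causal (high-score) and
    non-causal sets, then merge similar non-causal tokens by clustering.

    tokens : (N, D) array of feature tokens
    scores : (N,) array of keypoint-relevance scores (assumed given)
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    order = np.argsort(-scores)          # descending relevance
    causal = tokens[order[:k]]           # keypoint-relevant tokens, kept as-is
    non_causal = tokens[order[k:]]       # background/object tokens
    if len(non_causal) == 0:
        return causal, np.empty((0, d))

    # Merge redundant non-causal tokens with a small k-means loop,
    # keeping one representative (cluster mean) per group of similar tokens.
    rng = np.random.default_rng(seed)
    c = min(n_clusters, len(non_causal))
    centers = non_causal[rng.choice(len(non_causal), c, replace=False)]
    for _ in range(10):
        dists = np.linalg.norm(non_causal[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        centers = np.stack([
            non_causal[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(c)
        ])
    return causal, centers
```

The design point this sketch reflects is that causal tokens are preserved exactly, while non-causal tokens are compressed rather than discarded, so potentially useful context survives at reduced redundancy.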

Published

2025-04-11

How to Cite

Chen, H., Wu, S., Wang, Z., Yin, Y., Jiao, Y., Lyu, Y., & Liu, Z. (2025). Causal-Inspired Multitask Learning for Video-Based Human Pose Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 2052–2060. https://doi.org/10.1609/aaai.v39i2.32202

Section

AAAI Technical Track on Computer Vision I