STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering

Authors

  • Yueqian Wang Wangxuan Institute of Computer Technology, Peking University
  • Yuxuan Wang Beijing Institute for General Artificial Intelligence; National Key Laboratory of General Artificial Intelligence
  • Kai Chen School of Economics, Peking University
  • Dongyan Zhao Wangxuan Institute of Computer Technology, Peking University; National Key Laboratory of General Artificial Intelligence

DOI:

https://doi.org/10.1609/aaai.v38i17.29890

Keywords:

NLP: Question Answering, CV: Video Understanding & Activity Analysis

Abstract

Recently we have witnessed the rapid development of video question answering models. However, most models can only handle simple videos in terms of temporal reasoning, and their performance tends to drop when answering temporal-reasoning questions on long and informative videos. To tackle this problem, we propose STAIR, a Spatial-Temporal Reasoning model with Auditable Intermediate Results for video question answering. STAIR is a neural module network that contains a program generator to decompose a given question into a hierarchical combination of several sub-tasks, and a set of lightweight neural modules to complete each of these sub-tasks. Though neural module networks have been widely studied on image-text tasks, applying them to videos is non-trivial, as reasoning on videos requires different abilities. In this paper, we define a set of basic video-text sub-tasks for video question answering and design a set of lightweight modules to complete them. Unlike most prior works, STAIR's modules return intermediate outputs specific to their intentions instead of always returning attention maps, which makes them easier to interpret and to combine with pre-trained models. We also introduce intermediate supervision to make these intermediate outputs more accurate. We conduct extensive experiments on several video question answering datasets under various settings to demonstrate STAIR's performance, explainability, compatibility with pre-trained models, and applicability when program annotations are not available. Code: https://github.com/yellow-binary-tree/STAIR
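To illustrate the neural-module-network idea the abstract describes, here is a minimal sketch: a "program" decomposes a question into sub-tasks, each handled by a small module that returns an auditable, typed intermediate result rather than an attention map. All module names, the program format, and the toy frame representation below are illustrative assumptions, not STAIR's actual implementation (which uses learned neural modules over video features).

```python
# Toy neural-module-network executor: each module returns a typed,
# human-readable intermediate result, so the reasoning chain is auditable.
# Module names and program format are hypothetical, for illustration only.
from typing import Callable, Dict, List

# A "video" here is just a list of frames, each a set of detected object
# labels -- a stand-in for the visual features a real model would use.
Video = List[set]

def filter_frames(video: Video, obj: str) -> List[int]:
    """Sub-task: indices of frames containing `obj` (interpretable output)."""
    return [i for i, frame in enumerate(video) if obj in frame]

def temporal_after(indices: List[int], pivot: int) -> List[int]:
    """Sub-task: keep only frame indices after a pivot frame."""
    return [i for i in indices if i > pivot]

def exists(indices: List[int]) -> bool:
    """Sub-task: does any frame remain?"""
    return len(indices) > 0

MODULES: Dict[str, Callable] = {
    "Filter": filter_frames,
    "After": temporal_after,
    "Exists": exists,
}

def run_program(video: Video, program: tuple):
    """Execute a nested (module, *args) program bottom-up, recording every
    intermediate output so the whole reasoning chain can be inspected."""
    trace = []

    def eval_node(node):
        name, *args = node
        args = [eval_node(a) if isinstance(a, tuple) else a for a in args]
        # Only Filter reads the raw video; later modules consume earlier outputs.
        out = MODULES[name](video, *args) if name == "Filter" else MODULES[name](*args)
        trace.append((name, out))
        return out

    return eval_node(program), trace

# "Does a dog appear after frame 1?" expressed as a hypothetical program:
video = [{"cat"}, {"cat", "ball"}, {"dog"}, {"dog", "ball"}]
program = ("Exists", ("After", ("Filter", "dog"), 1))
answer, trace = run_program(video, program)
# trace holds each sub-task's output, e.g. Filter -> [2, 3], Exists -> True
```

The key design point mirrored here is that each module's output has a meaning specific to its sub-task (frame indices, a boolean), so an intermediate supervision signal can be attached to it and errors can be localized to a single step.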

Published

2024-03-24

How to Cite

Wang, Y., Wang, Y., Chen, K., & Zhao, D. (2024). STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19215-19223. https://doi.org/10.1609/aaai.v38i17.29890

Section

AAAI Technical Track on Natural Language Processing II