(2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering

Authors

  • Anoop Cherian, Mitsubishi Electric Research Laboratories, Cambridge, MA
  • Chiori Hori, Mitsubishi Electric Research Laboratories, Cambridge, MA
  • Tim K. Marks, Mitsubishi Electric Research Laboratories, Cambridge, MA
  • Jonathan Le Roux, Mitsubishi Electric Research Laboratories, Cambridge, MA

DOI:

https://doi.org/10.1609/aaai.v36i1.19922

Keywords:

Computer Vision (CV), Speech & Natural Language Processing (SNLP), Machine Learning (ML), Knowledge Representation And Reasoning (KRR)

Abstract

Spatio-temporal scene-graph approaches to video-based reasoning tasks, such as video question answering (QA), typically construct such graphs for every video frame. These approaches often ignore the fact that videos are essentially sequences of 2D "views" of events happening in a 3D space, and that the semantics of the 3D scene can thus be carried over from frame to frame. Leveraging this insight, we propose a (2.5+1)D scene graph representation to better capture the spatio-temporal information flows inside videos. Specifically, we first create a 2.5D (pseudo-3D) scene graph by transforming every 2D frame to have an inferred 3D structure using an off-the-shelf 2D-to-3D transformation module, after which we register the video frames into a shared (2.5+1)D spatio-temporal space and ground each 2D scene graph within it. Such a (2.5+1)D graph is then segregated into a static sub-graph and a dynamic sub-graph, according to whether the objects within them usually move in the world. The nodes of the dynamic sub-graph are enriched with motion features capturing their interactions with other graph nodes. Next, for the video QA task, we present a novel transformer-based reasoning pipeline that embeds the (2.5+1)D graph into a spatio-temporal hierarchical latent space, in which the sub-graphs and their interactions are captured at varied granularity. To demonstrate the effectiveness of our approach, we present experiments on the NExT-QA and AVSD-QA datasets. Our results show that the proposed (2.5+1)D representation leads to faster training and inference, while our hierarchical model achieves superior performance on the video QA task compared to the state of the art.
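The abstract's static/dynamic segregation step can be illustrated with a minimal sketch. This is not the authors' implementation: the object-class lists, the tuple-based graph format, and the rule that mixed edges stay with the dynamic sub-graph (whose nodes receive the motion features) are all assumptions made here for clarity.

```python
# Illustrative sketch: split a scene graph into static and dynamic sub-graphs
# by whether each node's object class usually moves in the world.
# Class lists and graph format are assumptions, not from the paper.

DYNAMIC_CLASSES = {"person", "car", "dog", "ball"}  # hypothetical list

def split_scene_graph(nodes, edges):
    """nodes: list of (node_id, class_name); edges: list of (src, dst, rel).
    Returns (static_subgraph, dynamic_subgraph), each a (nodes, edges) pair.
    Edges touching any dynamic node are kept in the dynamic sub-graph,
    since those nodes are the ones enriched with motion/interaction features."""
    dyn_ids = {nid for nid, cls in nodes if cls in DYNAMIC_CLASSES}
    static_nodes = [(nid, c) for nid, c in nodes if nid not in dyn_ids]
    dynamic_nodes = [(nid, c) for nid, c in nodes if nid in dyn_ids]
    static_edges = [e for e in edges if e[0] not in dyn_ids and e[1] not in dyn_ids]
    dynamic_edges = [e for e in edges if e[0] in dyn_ids or e[1] in dyn_ids]
    return (static_nodes, static_edges), (dynamic_nodes, dynamic_edges)

# Example frame: a person sitting on a sofa that is next to a table.
nodes = [(0, "person"), (1, "sofa"), (2, "table")]
edges = [(0, 1, "sitting_on"), (1, 2, "next_to")]
static_g, dynamic_g = split_scene_graph(nodes, edges)
# static_g keeps sofa/table and their relation; dynamic_g keeps the person
# and the person-sofa interaction edge.
```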

Published

2022-06-28

How to Cite

Cherian, A., Hori, C., Marks, T. K., & Le Roux, J. (2022). (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 444-453. https://doi.org/10.1609/aaai.v36i1.19922

Section

AAAI Technical Track on Computer Vision I