STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training

Authors

  • Weihong Zhong, Harbin Institute of Technology
  • Mao Zheng, Tencent MLPD
  • Duyu Tang, Independent Researcher
  • Xuan Luo, Tencent MLPD
  • Heng Gong, Harbin Institute of Technology
  • Xiaocheng Feng, Harbin Institute of Technology; Peng Cheng Laboratory
  • Bing Qin, Harbin Institute of Technology; Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v37i3.25483

Keywords:

CV: Language and Vision

Abstract

Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored. In this work, we propose STOA-VLP, a pre-training framework that jointly models object and action information across spatial and temporal dimensions. More specifically, the model regards object trajectories across frames and multiple action features from the video as fine-grained features. In addition, we design two auxiliary tasks to better incorporate both kinds of information into the pre-training process of the video-language model. The first is the dynamic object-text alignment task, which builds a better connection between object trajectories and the relevant noun tokens. The second is the spatial-temporal action set prediction task, which guides the model to generate consistent action features by predicting the actions found in the text. Extensive experiments on three downstream tasks (video captioning, text-video retrieval, and video question answering) demonstrate the effectiveness of our proposed STOA-VLP (e.g., a 3.7 ROUGE-L improvement on the MSR-VTT video captioning benchmark and a 2.9% accuracy improvement on the MSVD video question answering benchmark over previous approaches).
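To make the two auxiliary objectives concrete, the following Python sketch shows one plausible formulation: a fine-grained contrastive loss for the dynamic object-text alignment (each trajectory is scored against its best-matching noun), and a multi-label set prediction loss over an action vocabulary for the spatial-temporal action set prediction. All tensor shapes, function names, and pooling choices here are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the two auxiliary objectives (illustrative only;
# shapes, names, and pooling choices are assumptions, not the paper's code).
import torch
import torch.nn.functional as F

def dynamic_object_text_alignment(obj_feats, noun_feats, temperature=0.07):
    # obj_feats:  (B, O, D) object-trajectory features per video
    # noun_feats: (B, N, D) noun-token embeddings per caption
    obj = F.normalize(obj_feats, dim=-1)
    nouns = F.normalize(noun_feats, dim=-1)
    # Similarity of every video's trajectories to every caption's nouns,
    # max-pooled over nouns (best-matching noun per trajectory) and
    # averaged over trajectories to get a video-text alignment score.
    sim = torch.einsum('bod,cnd->bcon', obj, nouns)             # (B, B, O, N)
    scores = sim.max(dim=-1).values.mean(dim=-1) / temperature  # (B, B)
    labels = torch.arange(scores.size(0), device=scores.device)
    # Symmetric InfoNCE: matched video-text pairs lie on the diagonal.
    return 0.5 * (F.cross_entropy(scores, labels)
                  + F.cross_entropy(scores.t(), labels))

def action_set_prediction(action_logits, target_actions):
    # action_logits:  (B, Q, V) per-action-query scores over an action vocab
    # target_actions: (B, V)    multi-hot set of actions mined from the text
    set_logits = action_logits.max(dim=1).values  # pool queries -> (B, V)
    return F.binary_cross_entropy_with_logits(set_logits, target_actions.float())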

Published

2023-06-26

How to Cite

Zhong, W., Zheng, M., Tang, D., Luo, X., Gong, H., Feng, X., & Qin, B. (2023). STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3715–3723. https://doi.org/10.1609/aaai.v37i3.25483

Section

AAAI Technical Track on Computer Vision III