Exploiting Auxiliary Caption for Video Grounding

Authors

  • Hongxiang Li School of Electronic and Computer Engineering, Peking University
  • Meng Cao International Digital Economy Academy (IDEA)
  • Xuxin Cheng School of Electronic and Computer Engineering, Peking University
  • Yaowei Li School of Electronic and Computer Engineering, Peking University
  • Zhihong Zhu School of Electronic and Computer Engineering, Peking University
  • Yuexian Zou School of Electronic and Computer Engineering, Peking University

DOI:

https://doi.org/10.1609/aaai.v38i17.29812

Keywords:

NLP: Language Grounding & Multi-modal NLP, CV: Language and Vision, CV: Multi-modal Vision, CV: Video Understanding & Activity Analysis

Abstract

Video grounding aims to locate a moment of interest that matches a given query sentence in an untrimmed video. Previous works overlook the sparsity dilemma in video annotations, which fails to provide contextual information between potential events and query sentences in the dataset. In this paper, we contend that exploiting easily available captions that describe general actions, i.e., the auxiliary captions defined in this paper, significantly boosts performance. To this end, we propose an Auxiliary Caption Network (ACNet) for video grounding. Specifically, we first apply dense video captioning to generate dense captions and then obtain auxiliary captions via Non-Auxiliary Caption Suppression (NACS). To capture the potential information in auxiliary captions, we propose Caption Guided Attention (CGA) to project the semantic relations between auxiliary captions and query sentences into the temporal space and fuse them into visual representations. Considering the gap between auxiliary captions and the ground truth, we propose Asymmetric Cross-modal Contrastive Learning (ACCL), which constructs more negative pairs to maximize cross-modal mutual information. Extensive experiments on three public datasets (i.e., ActivityNet Captions, TACoS, and ActivityNet-CG) demonstrate that our method significantly outperforms state-of-the-art methods.
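To make the ACCL idea above concrete, the sketch below shows a minimal, PyTorch-style asymmetric cross-modal contrastive loss. It is an illustrative assumption, not the paper's exact formulation: the function name accl_loss, the InfoNCE form, and the specific asymmetry (auxiliary captions serve only as extra negatives, never as positives) are choices consistent with the abstract, under the assumption that auxiliary captions are close to, but distinct from, the ground-truth query.

```python
import torch
import torch.nn.functional as F

def accl_loss(video_emb, query_emb, aux_emb, tau=0.07):
    """Illustrative asymmetric cross-modal contrastive loss (not the paper's exact loss).

    video_emb: (B, D) moment-level visual features
    query_emb: (B, D) query-sentence features; the i-th query is the positive
               for the i-th video
    aux_emb:   (M, D) auxiliary-caption features, used only as additional
               negatives (the asymmetry): they are near, but not equal to,
               the ground truth, so they are never treated as positives
    """
    v = F.normalize(video_emb, dim=-1)
    q = F.normalize(query_emb, dim=-1)
    a = F.normalize(aux_emb, dim=-1)

    targets = torch.arange(v.size(0), device=v.device)

    # Video -> text: positives on the diagonal; auxiliary captions are
    # appended as extra negatives for every video, enlarging the negative set.
    logits_v2t = torch.cat([v @ q.t(), v @ a.t()], dim=1) / tau  # (B, B + M)
    loss_v2t = F.cross_entropy(logits_v2t, targets)

    # Text -> video: standard in-batch contrastive term only, since auxiliary
    # captions are not required to match any specific moment.
    logits_t2v = (q @ v.t()) / tau  # (B, B)
    loss_t2v = F.cross_entropy(logits_t2v, targets)

    return 0.5 * (loss_v2t + loss_t2v)
```

The asymmetry in this sketch reflects the stated gap between auxiliary captions and the ground truth: auxiliary captions contribute more negative pairs in the video-to-text direction, which tightens the cross-modal mutual information bound without forcing them to act as positives.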

Published

2024-03-24

How to Cite

Li, H., Cao, M., Cheng, X., Li, Y., Zhu, Z., & Zou, Y. (2024). Exploiting Auxiliary Caption for Video Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18508-18516. https://doi.org/10.1609/aaai.v38i17.29812

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II