Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding

Authors

  • Wenjia Geng, Shenzhen International Graduate School, Tsinghua University
  • Yong Liu, Shenzhen International Graduate School, Tsinghua University
  • Lei Chen, University of Science and Technology Beijing
  • Sujia Wang, Shenzhen International Graduate School, Tsinghua University
  • Jie Zhou, Department of Automation, Tsinghua University
  • Yansong Tang, Shenzhen International Graduate School, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i3.27959

Keywords:

CV: Video Understanding & Activity Analysis, CV: Language and Vision

Abstract

Weakly Supervised Temporal Article Grounding (WSAG) is a challenging and practical task in video understanding. Given a video and a relevant article whose sentences sit at different semantic scales, WSAG aims to localize the corresponding video segment for every “groundable” sentence. Compared to other grounding tasks, e.g., localizing one target segment for a given sentence query, WSAG faces an essential obstacle rooted in the intricate multi-scale information inherent in both the textual and visual modalities. Existing methods overlook the modeling and alignment of this structured information across multi-scale video segments and hierarchical textual content. To this end, we propose a Multi-Scale Video-Text Correspondence Learning (MVTCL) framework, which improves grounding performance in complex scenes by modeling multi-scale semantic correspondence both within and between modalities. Specifically, MVTCL first aggregates video content over distinct temporal scales and leverages hierarchical textual relationships in both the temporal and semantic dimensions via a semantic calibration module. A multi-scale contrastive learning module is then introduced to generate more discriminative representations by selecting typical contexts and performing inter-video contrastive learning. Through this multi-scale semantic calibration architecture and supervision design, our method achieves new state-of-the-art performance on existing WSAG benchmarks.
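To make the two core ingredients named in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of (i) pooling frame features over several temporal scales and (ii) a symmetric InfoNCE-style video-text contrastive loss. All module names, tensor shapes, and hyperparameters (the pooling scales, the temperature tau, the mean-pooled video embedding) are illustrative assumptions; the paper's semantic calibration and typical-context selection are not reproduced here.

```python
# Hypothetical sketch of multi-scale temporal aggregation + video-text
# contrastive learning. Assumed shapes/hyperparameters, not the MVTCL release.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAggregator(nn.Module):
    """Pools frame features over several temporal scales (assumed scales)."""

    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.proj = nn.Linear(dim, dim)

    def forward(self, frames):                 # frames: (B, T, D)
        x = frames.transpose(1, 2)             # (B, D, T) for 1-D pooling
        outs = []
        for s in self.scales:
            pooled = F.avg_pool1d(x, kernel_size=s, stride=s)  # coarser clips
            outs.append(pooled.transpose(1, 2))                # (B, T//s, D)
        segments = torch.cat(outs, dim=1)      # segments from all scales
        return self.proj(segments)


def info_nce(video_emb, text_emb, tau=0.07):
    """Symmetric InfoNCE between matched video/text embeddings, both (B, D)."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / tau                     # (B, B) similarity matrix
    labels = torch.arange(v.size(0), device=v.device)  # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))


if __name__ == "__main__":
    B, T, D = 4, 16, 256
    agg = MultiScaleAggregator(D)
    segments = agg(torch.randn(B, T, D))       # (B, 16 + 8 + 4, D)
    video_emb = segments.mean(dim=1)           # crude pooling to one vector
    text_emb = torch.randn(B, D)               # stand-in sentence embeddings
    print(info_nce(video_emb, text_emb).item())
```

In the paper's pipeline, the multi-scale segments would be calibrated against the article's sentence hierarchy and contrasted against typical contexts from other videos, rather than mean-pooled and paired one-to-one as in this toy example.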

Published

2024-03-24

How to Cite

Geng, W., Liu, Y., Chen, L., Wang, S., Zhou, J., & Tang, Y. (2024). Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1896-1904. https://doi.org/10.1609/aaai.v38i3.27959

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II