Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network

Authors

  • Xiang Fang Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology; Key Laboratory of Data Protection and Intelligent Management (Sichuan University), Ministry of Education
  • Wanlong Fang Nanyang Technological University, Singapore
  • Changshuo Wang Nanyang Technological University, Singapore
  • Daizong Liu Peking University
  • Keke Tang Guangzhou University
  • Jianfeng Dong Zhejiang Gongshang University
  • Pan Zhou Huazhong University of Science and Technology
  • Beibei Li Sichuan University

DOI:

https://doi.org/10.1609/aaai.v39i3.32298

Abstract

Given video-query pairs consisting of untrimmed videos and sentence queries, temporal sentence grounding (TSG) aims to locate the query-relevant segments in these videos. Although previous TSG methods have achieved remarkable success, they train each video-query pair separately and ignore the relationships between different pairs. To this end, in this paper, we introduce a new setting, Multi-Pair TSG, which aims to co-train these pairs. We propose a novel video-query co-training approach, the Multi-Thread Knowledge Transfer Network, to locate a variety of video-query pairs effectively and efficiently. First, we mine the spatial and temporal semantics across different queries so that they cooperate with each other. To learn intra- and inter-modal representations simultaneously, we design a cross-modal contrast module that explores semantic consistency through a self-supervised strategy. To fully align visual and textual representations across different pairs, we design a prototype alignment strategy that 1) matches object prototypes and phrase prototypes for spatial alignment, and 2) aligns activity prototypes and sentence prototypes for temporal alignment. Finally, we develop an adaptive negative selection module that adaptively generates a threshold for cross-modal matching. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method.
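The abstract names two mechanisms that can be sketched concretely: a cross-modal contrastive objective over co-trained video-query pairs, and an adaptively generated threshold for selecting negatives in cross-modal matching. The sketch below is illustrative only, assuming standard InfoNCE-style contrastive learning and a mean-plus-deviation threshold; the function names, embedding shapes, and the `k` parameter are our assumptions, not details from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalize embeddings so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_modal_contrastive_loss(video_emb, query_emb, temperature=0.07):
    """InfoNCE-style cross-modal loss (illustrative, not the paper's exact form).

    Matched video-query pairs (same batch row) are positives; every
    mismatched pair in the batch serves as a negative.
    """
    v = l2_normalize(video_emb)
    q = l2_normalize(query_emb)
    logits = v @ q.T / temperature                # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

def adaptive_negative_threshold(similarities, k=1.0):
    """Hypothetical adaptive threshold for negative selection.

    Derives the cut-off from the batch statistics of the mismatched
    (off-diagonal) similarities rather than using a fixed constant.
    """
    mask = ~np.eye(similarities.shape[0], dtype=bool)
    negatives = similarities[mask]
    return negatives.mean() + k * negatives.std()
```

As a usage sketch, one would compute the loss over a batch of paired embeddings, then treat only mismatched pairs whose cosine similarity exceeds the adaptive threshold as hard negatives for cross-modal matching.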

Published

2025-04-11

How to Cite

Fang, X., Fang, W., Wang, C., Liu, D., Tang, K., Dong, J., … Li, B. (2025). Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2915–2923. https://doi.org/10.1609/aaai.v39i3.32298

Section

AAAI Technical Track on Computer Vision II