VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding

Authors

  • Yongxin Guo School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China
  • Jingyu Liu Tencent PCG
  • Mingda Li Tencent PCG
  • Dingxin Cheng Shandong University
  • Xiaoying Tang School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China; The Shenzhen Future Network of Intelligence Institute, CUHK-Shenzhen, 518172, P.R. China; The Guangdong Provincial Key Laboratory of Future Networks of Intelligence, CUHK-Shenzhen, 518172, P.R. China
  • Dianbo Sui Harbin Institute of Technology
  • Qingbin Liu Tencent PCG
  • Xi Chen Tencent PCG
  • Kevin Zhao Tencent PCG

DOI:

https://doi.org/10.1609/aaai.v39i3.32341

Abstract

Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing. Unlike traditional task-specific models, Video Large Language Models (video LLMs) can handle multiple tasks concurrently in a zero-shot manner. Consequently, exploring the application of video LLMs to VTG tasks has become a burgeoning research area. However, despite considerable advancements in video content understanding, video LLMs often struggle to accurately pinpoint timestamps within videos, limiting their effectiveness on VTG tasks. To address this, we introduce VTG-LLM, a model designed to enhance video LLMs' timestamp localization abilities. Our approach includes: (1) effectively integrating timestamp knowledge into visual tokens; (2) incorporating absolute-time tokens to manage timestamp knowledge without concept shifts; and (3) introducing a lightweight, high-performance, slot-based token compression technique designed to accommodate the large number of sampled frames that VTG tasks require. Additionally, we present VTG-IT-120K, a collection of publicly available VTG datasets that we have re-annotated to improve upon low-quality annotations. Our comprehensive experiments demonstrate the superior performance of VTG-LLM in comparison to other video LLM methods across a variety of VTG tasks.
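The slot-based token compression mentioned in the abstract can be sketched, under assumptions, as cross-attention in which a small, fixed set of learnable slot queries attends over the full sequence of frame tokens, reducing many per-frame tokens to a constant number of slots regardless of how many frames are sampled. The function and variable names below are hypothetical, and this single-head NumPy version is only an illustration of the general idea, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_compress(tokens, slot_queries):
    """Compress a variable-length sequence of visual tokens into a fixed
    number of slots via single-head cross-attention: each slot query
    attends over all tokens and returns a weighted sum of them."""
    d = tokens.shape[-1]
    attn = softmax(slot_queries @ tokens.T / np.sqrt(d), axis=-1)  # (K, N)
    return attn @ tokens                                           # (K, d)

rng = np.random.default_rng(0)
frame_tokens = rng.standard_normal((96 * 32, 256))  # e.g. 96 frames x 32 tokens each
slots = rng.standard_normal((64, 256))              # 64 learnable slot queries
compressed = slot_compress(frame_tokens, slots)
print(compressed.shape)
```

Because the output size depends only on the number of slot queries, the LLM's context cost stays constant even when many frames must be sampled for dense temporal grounding.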

Published

2025-04-11

How to Cite

Guo, Y., Liu, J., Li, M., Cheng, D., Tang, X., Sui, D., Liu, Q., Chen, X., & Zhao, K. (2025). VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 3302-3310. https://doi.org/10.1609/aaai.v39i3.32341

Section

AAAI Technical Track on Computer Vision II