TC-LLaVA: Rethinking the Transfer of LLava from Image to Video Understanding with Temporal Considerations

Authors

  • Mingze Gao — Hong Kong University of Science and Technology (Guangzhou); Tencent PCG; Hong Kong University of Science and Technology
  • Jingyu Liu — Tencent PCG
  • Mingda Li — Tencent PCG
  • Jiangtao Xie — Dalian University of Technology
  • Qingbin Liu — Tencent PCG
  • Kevin Zhao — Tencent PCG
  • Xi Chen — Tencent PCG
  • Hui Xiong — Hong Kong University of Science and Technology (Guangzhou); Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v39i3.32317

Abstract

Multimodal Large Language Models (MLLMs) have significantly improved performance across various image-language applications. Recently, there has been growing interest in adapting image-pretrained MLLMs to video-related tasks. However, most efforts concentrate on enhancing the vision encoder and projector components, while the core component, the Large Language Model (LLM), remains comparatively under-explored. In this paper, we propose two strategies to enhance the model's capability on video understanding tasks by improving the attention computation inside the LLM. The first is Temporal-Aware Dual RoPE, an enhancement of Rotary Position Embedding (RoPE) that injects temporal position information to strengthen the MLLM's temporal modeling while preserving the relative position relationships of both visual and text tokens. The second is the Frame-wise Block Causal Attention Mask, a simple yet effective modification that broadens visual token interactions within and across video frames while maintaining the causal inference mechanism. Based on these methods, we adapt LLaVA for video understanding tasks, naming the result Temporal-Considered LLaVA (TC-LLaVA). TC-LLaVA achieves new state-of-the-art performance across various video understanding benchmarks with only supervised fine-tuning (SFT) on video-related datasets.
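To make the second idea concrete, the following is a minimal sketch of what a frame-wise block causal attention mask could look like, based only on the abstract's description: visual tokens attend bidirectionally within their own frame and to all tokens in earlier frames, while causality is preserved at the frame level. The function name, the flat frame-token layout, and the handling of token order are illustrative assumptions, not the paper's exact specification.

```python
import torch

def framewise_block_causal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Illustrative frame-wise block causal mask (assumed layout: frames are
    laid out contiguously in the sequence, with no interleaved text tokens).

    Returns a boolean matrix where entry [q, k] is True if query token q may
    attend to key token k: full attention within a frame and to all earlier
    frames, no attention to later frames.
    """
    n = num_frames * tokens_per_frame
    mask = torch.zeros(n, n, dtype=torch.bool)
    for q in range(n):
        q_frame = q // tokens_per_frame  # which frame this query token belongs to
        # Allow attention to every token in the same frame or any earlier frame.
        mask[q, : (q_frame + 1) * tokens_per_frame] = True
    return mask

# Example: 3 frames of 2 visual tokens each.
m = framewise_block_causal_mask(3, 2)
```

Unlike a plain token-level causal mask, the first token of a frame can here attend to later tokens of the same frame (e.g. `m[0, 1]` is `True`), while tokens from future frames remain masked (`m[0, 2]` is `False`).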

Published

2025-04-11

How to Cite

Gao, M., Liu, J., Li, M., Xie, J., Liu, Q., Zhao, K., … Xiong, H. (2025). TC-LLaVA: Rethinking the Transfer of LLava from Image to Video Understanding with Temporal Considerations. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 3086–3094. https://doi.org/10.1609/aaai.v39i3.32317

Section

AAAI Technical Track on Computer Vision II