Causality Matters: How Temporal Information Emerges in Video Language Models

Authors

  • Yumeng Shi, Nanyang Technological University
  • Quanyu Long, Nanyang Technological University
  • Yin Wu, Nanyang Technological University
  • Wenya Wang, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v40i11.37856

Abstract

Video language models (VideoLMs) have made significant progress in multimodal understanding. However, temporal understanding, which involves identifying event order, duration, and relationships across time, remains a core challenge. Prior works emphasize positional encodings (PEs) as a key mechanism for encoding temporal structure. Surprisingly, we find that removing or modifying PEs in video inputs yields minimal degradation in temporal-understanding performance. In contrast, reversing the frame sequence while preserving the original PEs causes a substantial drop. To explain this behavior, we conduct extensive analysis experiments to trace how temporal information is integrated within the model. We uncover a causal information pathway: temporal cues are progressively synthesized through inter-frame attention, aggregated in the final frame, and subsequently integrated into the query tokens. This mechanism shows that temporal reasoning emerges from interactions among visual tokens under the constraints of causal attention, which implicitly encodes temporal structure. Based on these insights, we propose two efficiency-oriented strategies: staged cross-modal attention and a temporal exit mechanism for early token truncation. Experiments on two benchmarks validate the effectiveness of both approaches.
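The abstract's central claim is that the causal attention mask itself, not the positional encodings, carries temporal order: under a causal mask, each token can attend only to earlier tokens, so the final frame's representation aggregates the whole sequence, and reversing the frames changes every token's visible context. The toy sketch below illustrates this effect with single-head causal self-attention over three "frame" vectors; it is an illustrative reconstruction, not code from the paper, and the vectors and shapes are invented for the demonstration.

```python
import numpy as np

def causal_attention(x):
    """Single-head causal self-attention (no positional encodings).

    Token t attends only to tokens 0..t, so the last token aggregates
    information from the entire sequence -- the kind of pathway the
    paper attributes temporal reasoning to.
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    # Mask out future positions (strict upper triangle).
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Toy "frames" with no positional information attached.
frames = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])

out_fwd = causal_attention(frames)
out_rev = causal_attention(frames[::-1])

# Even without PEs, reversing the frame order changes the per-frame
# outputs: the causal mask alone makes the model order-aware.
print(np.allclose(out_fwd, out_rev[::-1]))  # False
```

Note that the first token's output equals its input (it can only attend to itself), while the last token's output mixes all frames; this asymmetry is why the abstract's frame-reversal experiment degrades performance even when PEs are left untouched.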

Published

2026-03-14

How to Cite

Shi, Y., Long, Q., Wu, Y., & Wang, W. (2026). Causality Matters: How Temporal Information Emerges in Video Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 9006–9014. https://doi.org/10.1609/aaai.v40i11.37856

Section

AAAI Technical Track on Computer Vision VIII