TIME: Temporal-Sensitive Multi-Dimensional Instruction Tuning and Robust Benchmarking for Video-LLMs
DOI: https://doi.org/10.1609/aaai.v40i12.38002

Abstract
Video large language models have achieved remarkable performance on tasks such as video question answering; however, their temporal understanding remains suboptimal. To address this limitation, we curate a dedicated instruction fine-tuning dataset that enhances temporal comprehension across five key dimensions. To reduce reliance on costly temporal annotations, we introduce a multi-task prompt fine-tuning approach that seamlessly integrates temporal-sensitive tasks into existing instruction datasets without requiring additional annotations. Furthermore, we develop a novel benchmark for temporal-sensitive video understanding that not only fills the gaps in dimension coverage left by existing benchmarks but also rigorously filters out potential shortcuts, ensuring a more accurate evaluation. Extensive experimental results demonstrate that our approach significantly enhances the temporal understanding of video-LLMs while avoiding reliance on shortcuts.
Published
2026-03-14
How to Cite
Wang, Y., Liu, M., Liu, W., Song, X., Wen, B., Yang, F., Gao, T., Zhang, D., Zhou, G., & Nie, L. (2026). TIME: Temporal-Sensitive Multi-Dimensional Instruction Tuning and Robust Benchmarking for Video-LLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 10323-10331. https://doi.org/10.1609/aaai.v40i12.38002
Section
AAAI Technical Track on Computer Vision IX