VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning
DOI:
https://doi.org/10.1609/aaai.v39i4.32474

Abstract
Despite the advancements of Video Large Language Models (VideoLLMs) on various tasks, they struggle with fine-grained temporal understanding, such as Dense Video Captioning (DVC). DVC is a complicated task of describing all events within a video while also temporally localizing them, integrating multiple fine-grained tasks including video segmentation, video captioning, and temporal video grounding. Previous VideoLLMs attempt to solve DVC in a single step, failing to utilize their reasoning capability. Moreover, previous training objectives for VideoLLMs do not fully reflect the evaluation metrics, and therefore do not provide supervision directly aligned with the target tasks. To address these problems, we propose a novel framework named VidChain, comprising Chain-of-Tasks (CoTasks) and Metric-based Direct Preference Optimization (M-DPO). CoTasks decomposes a complex task into a sequence of sub-tasks, allowing VideoLLMs to leverage their reasoning capabilities more effectively. M-DPO aligns a VideoLLM with evaluation metrics, providing fine-grained supervision to each task that is well aligned with the metrics. Applied to two different VideoLLMs, VidChain consistently improves their fine-grained video understanding, thereby outperforming previous VideoLLMs on two different DVC benchmarks and also on the temporal video grounding task.
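As background for the M-DPO idea described above: Direct Preference Optimization trains on pairs of responses where one is preferred over the other, and "metric-based" here means the preference comes from an evaluation metric score rather than human judgment. The sketch below is a minimal illustration under that assumption, not the paper's implementation; the function names, the pairing heuristic, and the choice of metric are all hypothetical.

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_*_policy / logp_*_ref are the summed log-probabilities of the
    chosen (w) and rejected (l) responses under the policy and the frozen
    reference model. beta scales the implicit reward margin.
    """
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # -log sigmoid(margin), written to stay numerically well-behaved
    return math.log1p(math.exp(-margin))

def build_metric_pairs(candidates):
    """Turn sampled responses into preference pairs using a metric score.

    `candidates` is a list of (response, metric_score) tuples, where the
    score could be, e.g., a captioning or localization metric (hypothetical
    pairing scheme: best-scoring response preferred over each other one).
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best = ranked[0][0]
    return [(best, other) for other, _ in ranked[1:]]
```

With equal policy and reference log-probabilities the margin is zero and the loss is log 2, the usual starting point before the policy drifts from the reference; training then pushes the margin positive for metric-preferred responses.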
Published
2025-04-11
How to Cite
Lee, J. S., Kim, J., Na, J., Park, J., & Kim, H. J. (2025). VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 4499–4507. https://doi.org/10.1609/aaai.v39i4.32474
Section
AAAI Technical Track on Computer Vision III