Learning Procedural-Aware Video Representations Through State-Grounded Hierarchy Unfolding

Authors

  • Jinghan Zhao State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University
  • Yifei Huang The University of Tokyo, Japan
  • Feng Lu State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University

DOI:

https://doi.org/10.1609/aaai.v40i16.38318

Abstract

Learning procedural-aware video representations is a key step towards building agents that can reason about and execute complex tasks. Existing methods typically address this problem by aligning visual content with textual descriptions at the task and step levels to inject procedural semantics into video representations. However, due to their high level of abstraction, "task" and "step" descriptions fail to form a robust alignment with the concrete, observable details in visual data. To address this, we introduce "states", i.e., textual snapshots of object configurations, as a visually grounded semantic layer that anchors abstract procedures to what a model can actually see. We formalize this insight in a novel Task-Step-State (TSS) framework, where tasks are achieved via steps that drive transitions between observable states. To enforce this structure, we propose a progressive pre-training strategy that unfolds the TSS hierarchy, forcing the model to first ground representations in states before associating them with steps and, ultimately, high-level tasks. Extensive experiments on the COIN and CrossTask datasets show that our method outperforms baseline models on multiple downstream tasks, including task recognition, step recognition, and next-step prediction. Ablation studies show that introducing state supervision is a key driver of performance gains across all tasks. Additionally, our progressive pre-training strategy proves more effective than standard joint training, as it better enforces the intended hierarchical structure.

Published

2026-03-14

How to Cite

Zhao, J., Huang, Y., & Lu, F. (2026). Learning Procedural-Aware Video Representations Through State-Grounded Hierarchy Unfolding. Proceedings of the AAAI Conference on Artificial Intelligence, 40(16), 13172-13180. https://doi.org/10.1609/aaai.v40i16.38318

Section

AAAI Technical Track on Computer Vision XIII