Towards Automatic Learning of Procedures From Web Instructional Videos

Authors

  • Luowei Zhou, Robotics Institute, University of Michigan
  • Chenliang Xu, University of Rochester
  • Jason Corso, University of Michigan

DOI

https://doi.org/10.1609/aaai.v32i1.12342

Keywords

Deep Learning, Computer Vision, Artificial Intelligence, Video Understanding, Language and Vision, YouCook2 Dataset

Abstract

The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning relies heavily on action labels or video subtitles, even during the evaluation phase, which makes these methods infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation---segmenting a video of a procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network that generates procedure segments by modeling the dependencies across segments. The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. Our experiments show that the proposed model outperforms competitive baselines in procedure segmentation.
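To make the task concrete, here is a toy sketch of the general recipe the abstract describes: enumerate temporal segment proposals over a video's frame features, then select segments one at a time, with each choice conditioned on what has already been selected (a stand-in for the paper's segment-level recurrence). Every function name, the sliding-window proposal scheme, and the norm-minus-redundancy scoring rule are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def candidate_segments(n_frames, lengths=(4, 8, 16)):
    """Enumerate sliding-window segment proposals as (start, end) frame pairs."""
    return [(s, s + L) for L in lengths
            for s in range(0, n_frames - L + 1, max(1, L // 2))]

def select_procedure_segments(frame_feats, k=3, lengths=(4, 8, 16)):
    """Greedily pick up to k non-overlapping segments.

    A running 'state' vector (a decayed mean of chosen segment features)
    penalizes redundant picks -- a crude, hand-rolled proxy for modeling
    dependencies across segments with a recurrent network.
    """
    proposals = candidate_segments(len(frame_feats), lengths)
    state = np.zeros(frame_feats.shape[1])
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for (s, e) in proposals:
            # skip proposals overlapping an already-selected segment
            if any(not (e <= cs or s >= ce) for cs, ce in chosen):
                continue
            seg = frame_feats[s:e].mean(axis=0)
            # "activity" score minus similarity to already-selected content
            score = np.linalg.norm(seg) - float(seg @ state)
            if score > best_score:
                best, best_score = (s, e), score
        if best is None:
            break
        chosen.append(best)
        s, e = best
        state = 0.5 * state + 0.5 * frame_feats[s:e].mean(axis=0)
    return sorted(chosen)
```

In the paper's setting the scoring and the cross-segment state would be learned end to end; this sketch only illustrates the interface (frame features in, ordered non-overlapping segments out).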

Published

2018-04-27

How to Cite

Zhou, L., Xu, C., & Corso, J. (2018). Towards Automatic Learning of Procedures From Web Instructional Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12342