Unsupervised Alignment of Natural Language Instructions with Video Segments

Authors

  • Iftekhar Naim, University of Rochester
  • Young Song, University of Rochester
  • Qiguang Liu, University of Rochester
  • Henry Kautz, University of Rochester
  • Jiebo Luo, University of Rochester
  • Daniel Gildea, University of Rochester

DOI:

https://doi.org/10.1609/aaai.v28i1.8939

Keywords:

Grounded Language Acquisition, Natural Language Processing, Language and Vision, Video Alignment, IBM models, HMM

Abstract

We propose an unsupervised learning algorithm for automatically inferring the mappings between English nouns and corresponding video objects. Given a sequence of natural language instructions and an unaligned video recording, we simultaneously align each instruction to its corresponding video segment, and align the nouns in each instruction to their corresponding objects in the video. While existing grounded language acquisition algorithms rely on pre-aligned supervised data (each sentence paired with a corresponding image frame or video segment), our algorithm infers the alignment automatically from the temporal structure of the video and the parallel text instructions. We propose two generative models that are closely related to the HMM and IBM Model 1 word alignment models used in statistical machine translation. We evaluate our algorithm on videos of biological experiments performed in wet labs, and demonstrate that it can align video segments to text instructions and match video objects to nouns in the absence of any direct supervision.
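To make the abstract's reference to IBM Model 1 concrete: that model treats nouns in an instruction as source words and detected video objects as target words, and uses EM to estimate a noun-to-object translation table from co-occurrence alone, with no alignment supervision. The Python sketch below shows that EM loop on toy data; the noun lists, blob labels, and all names here are illustrative assumptions for exposition, not the paper's actual pipeline, features, or data.

```python
# Minimal IBM Model 1 style EM for noun/object alignment (illustrative sketch).
# Hypothetical toy data: each pair holds the nouns from one instruction and
# the object labels detected (e.g., by blob tracking) in a candidate segment.

from collections import defaultdict

pairs = [
    (["pipette", "tube"], ["blue_blob", "clear_blob"]),
    (["tube", "rack"], ["clear_blob", "white_blob"]),
    (["pipette", "rack"], ["blue_blob", "white_blob"]),
]

nouns = {n for ns, _ in pairs for n in ns}
objs = {o for _, os_ in pairs for o in os_}

# t[o][n]: probability that noun n "translates" to video object o,
# initialized uniformly as in IBM Model 1.
t = {o: {n: 1.0 / len(nouns) for n in nouns} for o in objs}

for _ in range(20):  # EM iterations
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    # E-step: collect expected alignment counts under the current table.
    for ns, os_ in pairs:
        for o in os_:
            z = sum(t[o][n] for n in ns)  # normalizer over candidate nouns
            for n in ns:
                c = t[o][n] / z           # expected count of aligning n to o
                count[o][n] += c
                total[n] += c
    # M-step: renormalize counts into updated translation probabilities.
    for o in objs:
        for n in nouns:
            t[o][n] = count[o][n] / total[n]

for o in sorted(objs):
    best = max(t[o], key=t[o].get)
    print(f"{o:12s} -> {best} ({t[o][best]:.2f})")
```

On this toy input the table converges so that each blob's most probable noun is its consistent co-occurrence partner. The sketch deliberately omits the segment-level alignment problem; the paper's HMM-based variant presumably handles that part by exploiting the temporal structure the abstract mentions, treating which video segment each instruction maps to as a latent state sequence.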

Published

2014-06-21

How to Cite

Naim, I., Song, Y., Liu, Q., Kautz, H., Luo, J., & Gildea, D. (2014). Unsupervised Alignment of Natural Language Instructions with Video Segments. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8939