Learning to Describe Video with Weak Supervision by Exploiting Negative Sentential Information

Authors

  • Haonan Yu, Purdue University
  • Jeffrey Siskind, Purdue University

DOI:

https://doi.org/10.1609/aaai.v29i1.9790

Keywords:

language acquisition, Hidden Markov Model, video description

Abstract

Most previous work on video description trains individual parts of speech independently. From a linguistic point of view, it is more appealing for word models for all parts of speech to be learned simultaneously from whole sentences, a hypothesis some linguists have suggested for child language acquisition. In this paper, we learn to describe video by discriminatively training positive sentential labels against negative ones in a weakly supervised fashion: the meaning representations (i.e., HMMs) of individual words in these labels are learned from whole sentences without any correspondence annotation of what those words denote in the video. Textual descriptions are then generated for new video using the trained word models.
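The sketch below illustrates, under stated assumptions, the discriminative idea the abstract describes: each word's meaning is an HMM over per-frame video features, a sentential label is scored by combining the likelihoods of its word HMMs, and training pushes the scores of positive sentential labels above those of negative ones. This is not the authors' implementation; the function names are illustrative, and the sentence score here treats words as independent, ignoring the cross-word argument linking used in the paper.

```python
# Minimal sketch, assuming per-frame feature tracks have already been
# converted into per-word emission log-probabilities. Hypothetical names
# throughout; not the paper's actual training procedure.
import numpy as np
from scipy.special import logsumexp

def hmm_log_likelihood(log_pi, log_A, log_B):
    """Forward algorithm in log space: log P(observed frames | word HMM).
    log_pi: (S,)   initial-state log-probabilities
    log_A:  (S, S) state-transition log-probabilities
    log_B:  (T, S) per-frame emission log-probabilities
    """
    alpha = log_pi + log_B[0]
    for t in range(1, log_B.shape[0]):
        # alpha_j(t) = b_j(t) + logsum_i [ alpha_i(t-1) + a_ij ]
        alpha = log_B[t] + logsumexp(alpha[:, None] + log_A, axis=0)
    return logsumexp(alpha)

def sentence_log_score(sentence, word_hmms, emissions):
    """Score a sentential label against a clip by summing its word-HMM
    log-likelihoods (a simplification that treats words independently)."""
    return sum(hmm_log_likelihood(word_hmms[w][0], word_hmms[w][1], emissions[w])
               for w in sentence)

def discriminative_loss(pos_sentences, neg_sentences, word_hmms, emissions):
    """Negative log of the probability mass assigned to positive labels
    relative to positive plus negative labels (softmax-style contrast)."""
    pos = np.array([sentence_log_score(s, word_hmms, emissions)
                    for s in pos_sentences])
    neg = np.array([sentence_log_score(s, word_hmms, emissions)
                    for s in neg_sentences])
    return -(logsumexp(pos) - logsumexp(np.concatenate([pos, neg])))
```

In such a setup, the word-HMM parameters would be updated to decrease this loss over the weakly labeled training clips, and a description for a new video would be produced by searching for the sentence whose word models jointly explain it best.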

Published

2015-03-04

How to Cite

Yu, H., & Siskind, J. (2015). Learning to Describe Video with Weak Supervision by Exploiting Negative Sentential Information. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9790