TY - JOUR
AU - Yu, Haonan
AU - Siskind, Jeffrey
PY - 2015/03/04
Y2 - 2024/03/28
TI - Learning to Describe Video with Weak Supervision by Exploiting Negative Sentential Information
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 29
IS - 1
SE - AAAI Technical Track: Vision
DO - 10.1609/aaai.v29i1.9790
UR - https://ojs.aaai.org/index.php/AAAI/article/view/9790
SP -
AB - Most previous work on video description trains individual parts of speech independently. It is more appealing, from a linguistic point of view, for word models for all parts of speech to be learned simultaneously from whole sentences, a hypothesis suggested by some linguists for child language acquisition. In this paper, we learn to describe video by discriminatively training positive sentential labels against negative ones in a weakly supervised fashion: the meaning representations (i.e., HMMs) of individual words in these labels are learned from whole sentences without any correspondence annotation of what those words denote in the video. Textual descriptions are then generated for new video using the trained word models.
ER -