Leveraging Video Descriptions to Learn Video Question Answering

Authors

  • Kuo-Hao Zeng, Stanford University and National Tsing Hua University
  • Tseng-Hung Chen, National Tsing Hua University
  • Ching-Yao Chuang, National Tsing Hua University
  • Yuan-Hong Liao, National Tsing Hua University
  • Juan Carlos Niebles, Stanford University
  • Min Sun, National Tsing Hua University

DOI:

https://doi.org/10.1609/aaai.v31i1.11238

Keywords:

Question Answering, Language and Vision, Deep Learning/Neural Networks

Abstract

We propose a scalable approach to learning video-based question answering (QA): answering free-form natural language questions about the content of a video. Our approach automatically harvests a large number of videos and their descriptions freely available online. A large pool of candidate QA pairs is then generated automatically from the descriptions rather than annotated manually. Next, we use these candidate QA pairs to train several video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), and SS (Venugopalan et al. 2015). To handle imperfect candidate QA pairs, we propose a self-paced learning procedure that iteratively identifies them and mitigates their effect on training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective and that the extended SS model outperforms various baselines.
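
The abstract describes a self-paced selection step over the automatically generated (and therefore noisy) candidate QA pairs. Below is a minimal, generic sketch of such a loop, not the authors' exact procedure: it assumes hypothetical `compute_loss` and `train_step` callables for the underlying QA model, and uses the common self-paced heuristic of training on low-loss ("easy") pairs first and relaxing the selection threshold over time.

```python
import numpy as np

def self_paced_training(qa_pairs, compute_loss, train_step,
                        epochs=10, init_threshold=1.0, growth=1.5):
    """Generic self-paced learning loop over noisy candidate QA pairs.

    qa_pairs:      list of candidate (video, question, answer) examples
    compute_loss:  hypothetical callable, current model loss for one pair
    train_step:    hypothetical callable, one training pass on a subset
    """
    threshold = init_threshold
    for epoch in range(epochs):
        # Score every candidate pair under the current model.
        losses = np.array([compute_loss(pair) for pair in qa_pairs])

        # Keep only "easy" pairs; high-loss pairs are treated as
        # likely-noisy and excluded from this epoch's update.
        selected = [pair for pair, loss in zip(qa_pairs, losses)
                    if loss < threshold]

        train_step(selected)      # update the model on the trusted subset
        threshold *= growth       # admit harder pairs in later epochs
```

In this style of procedure, suspect QA pairs are never discarded outright; they simply re-enter training only once the model is confident enough to score them, which is one way to mitigate their effect as the abstract describes.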

Published

2017-02-12

How to Cite

Zeng, K.-H., Chen, T.-H., Chuang, C.-Y., Liao, Y.-H., Niebles, J. C., & Sun, M. (2017). Leveraging Video Descriptions to Learn Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11238