Structured Two-Stream Attention Network for Video Question Answering

Authors

  • Lianli Gao University of Electronic Science and Technology of China
  • Pengpeng Zeng University of Electronic Science and Technology of China
  • Jingkuan Song University of Electronic Science and Technology of China
  • Yuan-Fang Li Monash University
  • Wu Liu AI Research of JD.com
  • Tao Mei AI Research of JD.com
  • Heng Tao Shen University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v33i01.33016391

Abstract

To date, visual question answering (VQA), covering both image QA and video QA, remains a holy grail of vision and language understanding, and video QA is especially challenging. Whereas image QA focuses primarily on understanding the associations between image region-level details and the corresponding question, video QA requires a model to jointly reason over both the spatial and long-range temporal structure of a video, together with the text, to provide an accurate answer. In this paper, we specifically tackle video QA by proposing a Structured Two-stream Attention network, namely STA, to answer a free-form or open-ended natural language question about the content of a given video. First, we infer rich long-range temporal structure in videos using our structured segment component and encode the text features. Then, our structured two-stream attention component simultaneously localizes important visual instances, reduces the influence of background video, and focuses on the relevant text. Finally, the structured two-stream fusion component incorporates the different segments of the query- and video-aware context representations and infers the answer. Experiments on the large-scale video QA dataset TGIF-QA show that our proposed method significantly surpasses the best counterpart using a single representation for the video input by 13.0%, 13.5%, and 11.0% on the Action, Trans., and FrameQA tasks, and by 0.3 on the Count task. It also outperforms the best competitor using two representations on the Action, Trans., and FrameQA tasks by 4.1%, 4.7%, and 5.1%, respectively.
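To make the two-stream attention idea from the abstract concrete, below is a minimal illustrative sketch in PyTorch of one way mutual attention between video segments and question words could be wired: each stream is scored against a summary of the other, and the two attended context vectors are fused. The module name, dimensions, scoring functions, and the concatenation-based fusion are assumptions for exposition, not the authors' actual STA implementation.

```python
# Illustrative sketch only: a generic two-stream attention fusion.
# All names, dimensions, and the fusion scheme are assumptions; this is
# not the paper's STA architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamAttention(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.vid_proj = nn.Linear(dim, dim)
        self.txt_proj = nn.Linear(dim, dim)
        self.vid_score = nn.Linear(dim, 1)   # scores video segments
        self.txt_score = nn.Linear(dim, 1)   # scores question words

    def forward(self, video, question):
        # video:    (B, Nv, dim) segment-level visual features
        # question: (B, Nq, dim) word-level text features
        q_global = question.mean(dim=1, keepdim=True)   # (B, 1, dim)
        v_global = video.mean(dim=1, keepdim=True)      # (B, 1, dim)

        # Video stream: weight segments by relevance to the question,
        # down-weighting uninformative background segments.
        v_att = self.vid_score(torch.tanh(self.vid_proj(video) + q_global))
        v_ctx = (F.softmax(v_att, dim=1) * video).sum(dim=1)      # (B, dim)

        # Text stream: weight words by relevance to the video content.
        t_att = self.txt_score(torch.tanh(self.txt_proj(question) + v_global))
        t_ctx = (F.softmax(t_att, dim=1) * question).sum(dim=1)   # (B, dim)

        # Fuse the two attended context vectors (simple concatenation here).
        return torch.cat([v_ctx, t_ctx], dim=-1)                  # (B, 2*dim)

# Usage: fused = TwoStreamAttention()(video_feats, question_feats)
```

The fused vector would then feed a task-specific answer head (classification for Action/Trans./FrameQA, regression for Count); the paper's actual fusion operates over structured segments rather than this single-pass pooling.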

Published

2019-07-17

How to Cite

Gao, L., Zeng, P., Song, J., Li, Y.-F., Liu, W., Mei, T., & Shen, H. T. (2019). Structured Two-Stream Attention Network for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6391-6398. https://doi.org/10.1609/aaai.v33i01.33016391

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Natural Language Processing