Exploring Human-Like Attention Supervision in Visual Question Answering

Authors

  • Tingting Qiao, Zhejiang University
  • Jianfeng Dong, Zhejiang University
  • Duanqing Xu, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v32i1.12272

Keywords:

Computer Vision, Visual Question Answering

Abstract

Attention mechanisms have been widely applied to the Visual Question Answering (VQA) task, as they help the model focus on the areas of interest in both the visual and the textual input. To answer a question correctly, the model needs to selectively attend to different areas of an image, which suggests that an attention-based model may benefit from explicit attention supervision. In this work, we address the problem of adding attention supervision to VQA models. Since human attention data are scarce, we first propose a Human Attention Network (HAN) that generates human-like attention maps, trained on the recently released VQA Human ATtention (VQA-HAT) dataset. We then apply the pre-trained HAN to the VQA v2.0 dataset to automatically produce human-like attention maps for all image-question pairs; we name the resulting dataset the Human-Like ATtention (HLAT) dataset. Finally, we apply human-like attention supervision to an attention-based VQA model. Experiments show that adding human-like supervision yields both more accurate attention maps and better answering performance, suggesting a promising future for human-like attention supervision in VQA.
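The abstract does not spell out the form of the supervision term, so the following is only a minimal sketch of how such a loss is commonly combined with the answer loss: a KL-divergence penalty pulls the model's attention distribution toward the HAN-generated human-like map. The function name `attention_supervision_loss` and the weighting hyperparameter `alpha` are hypothetical, not from the paper.

```python
import torch
import torch.nn.functional as F

def attention_supervision_loss(answer_logits, answer_targets,
                               model_attention, human_like_attention,
                               alpha=0.5):
    """Sketch of a supervised-attention objective for a VQA model.

    model_attention / human_like_attention: (batch, num_regions)
    attention distributions over image regions, each summing to 1.
    alpha is an assumed trade-off weight, not a value from the paper.
    """
    # Standard answer-classification loss of the base VQA model.
    answer_loss = F.cross_entropy(answer_logits, answer_targets)

    # Attention supervision: KL divergence from the model's attention
    # to the human-like attention map produced by the pre-trained HAN.
    attn_loss = F.kl_div(model_attention.clamp_min(1e-8).log(),
                         human_like_attention, reduction="batchmean")

    return answer_loss + alpha * attn_loss
```

In this sketch the attention term acts as a regularizer: with `alpha=0`, training reduces to the unsupervised attention baseline, and increasing `alpha` trades answer accuracy against agreement with the human-like maps.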

Published

2018-04-27

How to Cite

Qiao, T., Dong, J., & Xu, D. (2018). Exploring Human-Like Attention Supervision in Visual Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12272