Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA


  • Badri Patro, Indian Institute of Technology
  • Anupriy, Indian Institute of Technology
  • Vinay Namboodiri, Indian Institute of Technology




In this paper, we aim to obtain improved attention for a visual question answering (VQA) task. It is challenging to provide supervision for attention. An observation we make is that visual explanations obtained through class activation mappings (specifically Grad-CAM), which are meant to explain the performance of various networks, could form a means of supervision. However, as the distributions of attention maps and of Grad-CAMs differ, it would not be suitable to use the latter directly as supervision. Instead, we propose a discriminator that aims to distinguish samples of visual explanations from attention maps. Adversarial training of the attention regions as a two-player game between attention and explanation serves to bring the distributions of attention maps and visual explanations closer. Significantly, we observe that providing such a means of supervision also results in attention maps that are more closely related to human attention, yielding a substantial improvement over baseline stacked attention network (SAN) models and a clear improvement in the rank-correlation metric on the VQA task. The method can also be combined with recent MCB-based methods and results in consistent improvement. We also provide comparisons with other means of learning distributions, such as losses based on Correlation Alignment (Coral), Maximum Mean Discrepancy (MMD), and Mean Square Error (MSE), and observe that the adversarial loss outperforms these alternatives for learning the attention maps. Visualization of the results further confirms our hypothesis that attention maps improve with this form of supervision.
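The two-player game described above can be sketched in miniature: a logistic discriminator is trained to label explanation maps as real and attention maps as fake, while the attention maps are updated to fool it, pulling their distribution toward that of the explanations. This is only an illustrative toy, not the paper's implementation: the synthetic Gaussian "maps", the single-layer discriminator, and the learning rates are all assumptions, and in the actual model the generator gradient would flow into the VQA network's attention module rather than into the maps directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins (assumed, not from the paper): 14x14 maps flattened to
# 196-dim vectors, with explanations and attention drawn from different
# distributions that the adversarial game should bring closer.
d = 196
expl = rng.normal(1.0, 0.5, size=(64, d))   # "real": Grad-CAM-like maps
attn = rng.normal(0.0, 0.5, size=(64, d))   # "fake": attention maps
attn0_mean = attn.mean()                    # where attention starts

w, b, lr = np.zeros(d), 0.0, 0.1            # logistic discriminator
for _ in range(200):
    # Discriminator step: push explanations toward label 1, attention toward 0.
    x = np.vstack([expl, attn])
    y = np.concatenate([np.ones(64), np.zeros(64)])
    p = sigmoid(x @ w + b)
    g = p - y                               # gradient of binary cross-entropy
    w -= lr * (x.T @ g) / len(y)
    b -= lr * g.mean()

    # Attention ("generator") step: minimize -log D(attn), i.e. move the
    # attention maps so the discriminator mistakes them for explanations.
    p_a = sigmoid(attn @ w + b)
    attn -= 0.5 * ((p_a - 1.0)[:, None] * w)

# Near equilibrium the discriminator scores the two sets similarly,
# i.e. the distributions have become harder to tell apart.
score_gap = abs(sigmoid(expl @ w + b).mean() - sigmoid(attn @ w + b).mean())
```

In the paper the same idea is applied at the level of network training: the adversarial loss replaces a direct regression of attention onto Grad-CAM (compare the MSE baseline), which the authors find inferior precisely because it ignores the distribution mismatch.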




How to Cite

Patro, B., Anupriy, & Namboodiri, V. (2020). Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11848-11855. https://doi.org/10.1609/aaai.v34i07.6858



AAAI Technical Track: Vision