Differential Networks for Visual Question Answering

Authors

  • Chenfei Wu, Beijing University of Posts and Telecommunications
  • Jinlai Liu, Beijing University of Posts and Telecommunications
  • Xiaojie Wang, Beijing University of Posts and Telecommunications
  • Ruifan Li, Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v33i01.33018997

Abstract

The task of Visual Question Answering (VQA) has emerged in recent years owing to its potential applications. To address the VQA task, a model should fuse feature elements from both images and questions efficiently. Existing models fuse an image feature element v_i and a question feature element q_i directly, for example via an element-wise product v_i q_i. These solutions largely ignore two key points: 1) whether v_i and q_i lie in the same space, and 2) how to reduce the observation noise in v_i and q_i. We argue that differences between pairs of feature elements, such as (v_i − v_j) and (q_i − q_j), are more likely to lie in the same space, and that the difference operation helps reduce observation noise. To this end, we first propose Differential Networks (DN), a novel plug-and-play module that computes differences between pair-wise feature elements. Using DN as a building block, we then propose DN-based Fusion (DF), a novel model for the VQA task. We achieve state-of-the-art results on four publicly available datasets. Ablation studies also show the effectiveness of the difference operations in the DF model.
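The core idea in the abstract can be illustrated with a minimal sketch: instead of fusing v_i with q_i directly, form pairwise-difference matrices for each modality and fuse those. This is a hypothetical toy illustration of the pairwise-difference idea only, not the paper's actual DN/DF parameterization (which involves learned weights); the function name and pooling choice are assumptions.

```python
import numpy as np

def differential_fuse(v, q):
    """Toy sketch of difference-based fusion (not the paper's exact model).

    Builds Dv[i, j] = v[i] - v[j] and Dq[i, j] = q[i] - q[j], fuses the two
    difference matrices element-wise, then mean-pools over j to return a
    vector with the same dimensionality as the inputs.
    """
    assert v.shape == q.shape
    dv = v[:, None] - v[None, :]   # (d, d) pairwise differences of image features
    dq = q[:, None] - q[None, :]   # (d, d) pairwise differences of question features
    fused = dv * dq                # element-wise fusion of the differences
    return fused.mean(axis=1)      # pool over j -> fused feature of shape (d,)

rng = np.random.default_rng(0)
v = rng.standard_normal(8)         # stand-in image feature
q = rng.standard_normal(8)         # stand-in question feature
h = differential_fuse(v, q)
print(h.shape)                     # (8,)
```

Because each modality is differenced against itself before fusion, any constant offset (one simple kind of observation noise) in v or q cancels out of Dv and Dq, which is the intuition the abstract appeals to.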

Published

2019-07-17

How to Cite

Wu, C., Liu, J., Wang, X., & Li, R. (2019). Differential Networks for Visual Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8997-9004. https://doi.org/10.1609/aaai.v33i01.33018997

Section

AAAI Technical Track: Vision