DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue


  • Xiaoze Jiang Chinese Academy of Sciences
  • Jing Yu Chinese Academy of Sciences
  • Zengchang Qin Beihang University
  • Yingying Zhuang Chinese Academy of Sciences
  • Xingxing Zhang Microsoft Research Asia
  • Yue Hu Chinese Academy of Sciences
  • Qi Wu University of Adelaide




Different from the Visual Question Answering task, which requires answering only one question about an image, Visual Dialogue involves multiple questions that cover a broad range of visual content related to any objects, relationships, or semantics. The key challenge in Visual Dialogue is thus to learn a more comprehensive and semantically rich image representation that can adaptively attend to the image for different questions. In this research, we propose a novel model to depict an image from both visual and semantic perspectives. Specifically, the visual view helps capture appearance-level information, including objects and their relationships, while the semantic view enables the agent to understand high-level visual semantics ranging from the whole image to local regions. Furthermore, on top of such multi-view image features, we propose a feature selection framework that adaptively captures question-relevant information hierarchically at a fine-grained level. The proposed method achieved state-of-the-art results on benchmark Visual Dialogue datasets. More importantly, by visualizing the gate values we can tell which modality (visual or semantic) contributes more to answering the current question, which gives us insight into human cognition in Visual Dialogue.
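The gate values mentioned above can be illustrated with a minimal sketch of gated fusion between a visual and a semantic feature vector. This is an assumption-laden toy version with a single sigmoid gate and randomly initialized weights (`W`, `b` are hypothetical parameters), not the paper's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(visual, semantic, W, b):
    # Sketch only: a sigmoid gate conditioned on both views decides,
    # per feature dimension, how much the visual vs. semantic view
    # contributes to the fused image representation.
    gate = sigmoid(np.concatenate([visual, semantic]) @ W + b)
    return gate * visual + (1.0 - gate) * semantic, gate

# Toy example with random features and weights
rng = np.random.default_rng(0)
d = 4
visual = rng.standard_normal(d)
semantic = rng.standard_normal(d)
W = rng.standard_normal((2 * d, d))
b = np.zeros(d)

fused, gate = gated_fusion(visual, semantic, W, b)
```

Because the gate lies in (0, 1), each fused component is a convex combination of the corresponding visual and semantic components; inspecting `gate` shows which view dominates, which is the intuition behind the visualization described in the abstract.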




How to Cite

Jiang, X., Yu, J., Qin, Z., Zhuang, Y., Zhang, X., Hu, Y., & Wu, Q. (2020). DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11125-11132. https://doi.org/10.1609/aaai.v34i07.6769



AAAI Technical Track: Vision