Structured Co-reference Graph Attention for Video-grounded Dialogue

Authors

  • Junyeong Kim, Korea Advanced Institute of Science and Technology (KAIST)
  • Sunjae Yoon, Korea Advanced Institute of Science and Technology (KAIST)
  • Dahyun Kim, Korea Advanced Institute of Science and Technology (KAIST)
  • Chang D. Yoo, Korea Advanced Institute of Science and Technology (KAIST)

DOI:

https://doi.org/10.1609/aaai.v35i2.16273

Keywords:

Language and Vision, Multi-modal Vision, Conversational AI/Dialog Systems, Applications

Abstract

A video-grounded dialogue system referred to as Structured Co-reference Graph Attention (SCGA) is presented for decoding the answer sequence to a question regarding a given video while keeping track of the dialogue context. Although recent efforts have made great strides in improving response quality, performance remains far from satisfactory. The two main challenges are: (1) how to deduce co-reference among multiple modalities and (2) how to reason over the rich underlying semantic structure of video with its complex spatial and temporal dynamics. To this end, SCGA is based on (1) a Structured Co-reference Resolver that performs dereferencing by building a structured graph over multiple modalities, and (2) a Spatio-temporal Video Reasoner that captures the local-to-global dynamics of the video via gradually neighboring graph attention. SCGA also makes use of a pointer network to dynamically replicate parts of the question when decoding the answer sequence. The validity of the proposed SCGA is demonstrated on the AVSD@DSTC7 and AVSD@DSTC8 datasets, challenging video-grounded dialogue benchmarks, and on the TVQA dataset, a large-scale videoQA benchmark. Our empirical results show that SCGA outperforms other state-of-the-art dialogue systems on all benchmarks, while an extensive ablation study and qualitative analysis reveal the sources of the performance gain and improved interpretability.
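
The two mechanisms the abstract names, attention restricted to the edges of a structured graph and pointer-style copying from the question, are standard building blocks. Below is a minimal, generic single-head graph-attention sketch in the spirit of GAT (Veličković et al., 2018), not the authors' released code; the class name GraphAttention and the binary adjacency mask adj (which, in SCGA's setting, would encode the co-reference edges built by the resolver) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    """Single-head graph attention: nodes attend only to graph neighbors."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) binary edge mask.
        h = self.proj(x)                          # (N, out_dim)
        n = h.size(0)
        # Score every ordered node pair from concatenated features.
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        # Non-edges get -inf so attention flows only along the graph.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)          # (N, N) attention weights
        return alpha @ h                          # aggregated node features


# Toy usage: 5 nodes, random edges; self-loops keep every row non-empty.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)
out = GraphAttention(16, 32)(x, adj)              # shape (5, 32)
```

Likewise, a hedged sketch of the pointer/copy idea the abstract mentions (Vinyals et al., 2015): a learned gate mixes a vocabulary distribution with a copy distribution over question tokens. All names here (PointerDecoderStep, p_gen, p_copy) are illustrative; how SCGA wires this into its answer decoder is specific to the paper.

```python
import torch
import torch.nn as nn

class PointerDecoderStep(nn.Module):
    """One decoding step that mixes generating from the vocabulary with
    copying tokens from the encoded question (pointer/copy mechanism)."""

    def __init__(self, hid_dim: int, vocab_size: int):
        super().__init__()
        self.gen = nn.Linear(hid_dim, vocab_size)  # vocabulary logits
        self.gate = nn.Linear(hid_dim, 1)          # generate-vs-copy gate

    def forward(self, dec_state, q_enc, q_ids):
        # dec_state: (B, H); q_enc: (B, T, H); q_ids: (B, T) long token ids.
        p_gen = torch.softmax(self.gen(dec_state), dim=-1)          # (B, V)
        # Attention of the decoder state over question positions.
        scores = (q_enc @ dec_state.unsqueeze(-1)).squeeze(-1)      # (B, T)
        attn = torch.softmax(scores, dim=-1)
        # Scatter attention mass onto the vocabulary ids it points at.
        p_copy = torch.zeros_like(p_gen).scatter_add(1, q_ids, attn)
        g = torch.sigmoid(self.gate(dec_state))                     # (B, 1)
        return g * p_gen + (1.0 - g) * p_copy                       # (B, V)
```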

Published

2021-05-18

How to Cite

Kim, J., Yoon, S., Kim, D., & Yoo, C. D. (2021). Structured Co-reference Graph Attention for Video-grounded Dialogue. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1789-1797. https://doi.org/10.1609/aaai.v35i2.16273

Issue

Vol. 35 No. 2 (2021)

Section

AAAI Technical Track on Computer Vision I