Structured Co-reference Graph Attention for Video-grounded Dialogue
Keywords: Language and Vision, Multi-modal Vision, Conversational AI/Dialog Systems, Applications
Abstract
A video-grounded dialogue system referred to as the Structured Co-reference Graph Attention (SCGA) is presented for decoding the answer sequence to a question regarding a given video while keeping track of the dialogue context. Although recent efforts have made great strides in improving the quality of the response, performance is still far from satisfactory. The two main challenging issues are as follows: (1) how to deduce co-reference among multiple modalities and (2) how to reason on the rich underlying semantic structure of video with complex spatial and temporal dynamics. To this end, SCGA is based on (1) a Structured Co-reference Resolver that performs dereferencing via building a structured graph over multiple modalities, and (2) a Spatio-temporal Video Reasoner that captures local-to-global dynamics of video via gradually neighboring graph attention. SCGA makes use of a pointer network to dynamically replicate parts of the question for decoding the answer sequence. The validity of the proposed SCGA is demonstrated on the AVSD@DSTC7 and AVSD@DSTC8 datasets, two challenging video-grounded dialogue benchmarks, and the TVQA dataset, a large-scale videoQA benchmark. Our empirical results show that SCGA outperforms other state-of-the-art dialogue systems on both benchmarks, while an extensive ablation study and qualitative analysis reveal the performance gain and improved interpretability.
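To make the graph-attention idea in the abstract concrete, below is a minimal, hypothetical sketch of one attention round over a node graph: each node attends over its adjacent neighbors with dot-product scores and aggregates a softmax-weighted sum of their features. This is an illustrative simplification only; SCGA's actual structured co-reference graph and its gradually neighboring attention are parameterized far more richly than this.

```python
import math

def graph_attention(features, adj):
    """One simplified round of graph attention (illustrative only).

    features: list of node feature vectors (lists of floats)
    adj: adjacency matrix (adj[i][j] truthy if j is a neighbor of i)

    Each node i scores its neighbors by dot product, normalizes the
    scores with a softmax, and returns the weighted sum of neighbor
    features as its updated representation.
    """
    out = []
    for i, fi in enumerate(features):
        nbrs = [j for j in range(len(features)) if adj[i][j]]
        if not nbrs:  # isolated node: keep its own features
            out.append(list(fi))
            continue
        # Dot-product attention scores against each neighbor.
        scores = [sum(a * b for a, b in zip(fi, features[j])) for j in nbrs]
        # Numerically stable softmax over the neighborhood.
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        # Weighted sum of neighbor features.
        out.append([
            sum(w * features[j][k] for w, j in zip(weights, nbrs))
            for k in range(len(fi))
        ])
    return out
```

In SCGA the analogous operation runs over a structured graph linking dialogue, question, and video entities, with learned projections in place of the raw dot products used here.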
How to Cite
Kim, J., Yoon, S., Kim, D., & Yoo, C. D. (2021). Structured Co-reference Graph Attention for Video-grounded Dialogue. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1789-1797. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16273
AAAI Technical Track on Computer Vision I