Open Domain Dialogue Generation with Latent Images

Authors

  • Ze Yang, Beihang University
  • Wei Wu, Meituan
  • Huang Hu, Microsoft
  • Can Xu, Microsoft
  • Wei Wang, China Resources Group
  • Zhoujun Li, Beihang University

DOI:

https://doi.org/10.1609/aaai.v35i16.17675

Keywords:

Conversational AI/Dialog Systems, Language Grounding & Multi-modal NLP, Generation

Abstract

We consider grounding open domain dialogues with images. Existing work assumes that both an image and a textual context are available, but image-grounded dialogues are by nature harder to obtain than textual ones. We therefore propose learning a response generation model from both image-grounded dialogues and textual dialogues: assuming that the visual scene at the time of a conversation can be represented by an image, we recover the latent images of the textual dialogues with text-to-image generation techniques. The likelihood of the two types of dialogues is then formulated by a response generator and an image reconstructor, which are jointly learned within a conditional variational auto-encoding framework. Empirical studies are conducted on both image-grounded conversation and text-based conversation. In the first scenario, image-grounded dialogues, especially under a low-resource setting, can be effectively augmented by textual dialogues with latent images; in the second, latent images enrich the content of responses while keeping them relevant to the contexts.
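
To make the training objective described above concrete, below is a minimal sketch, not the authors' released code, of a conditional variational auto-encoding loss that couples a response generator with an image reconstructor and treats the image as latent for textual dialogues. All module names, dimensions, and the toy data are our own illustrative assumptions.

```python
# Hypothetical sketch of the CVAE-style objective described in the abstract:
# a response generator conditioned on a textual context and a (possibly
# latent) image representation z, plus an image reconstructor supervised
# only when a real image is available (image-grounded dialogues).
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 64  # hypothetical hidden / image-feature size


class LatentImageCVAE(nn.Module):
    def __init__(self, vocab_size=1000, dim=DIM, img_dim=DIM):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.ctx_enc = nn.GRU(dim, dim, batch_first=True)      # text encoder
        self.post_net = nn.Linear(2 * dim, 2 * img_dim)        # q(z | context, response)
        self.prior_net = nn.Linear(dim, 2 * img_dim)           # p(z | context)
        self.img_recon = nn.Linear(img_dim, img_dim)           # image reconstructor
        self.decoder = nn.GRU(dim, dim + img_dim, batch_first=True)
        self.out = nn.Linear(dim + img_dim, vocab_size)        # response generator head

    def encode(self, tokens):
        _, h = self.ctx_enc(self.embed(tokens))
        return h.squeeze(0)

    def forward(self, context, response, image_feat=None):
        c, r = self.encode(context), self.encode(response)
        mu_q, logvar_q = self.post_net(torch.cat([c, r], -1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior_net(c).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterize

        # Response generation conditioned on the context state and latent image z.
        h0 = torch.cat([c, z], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(response[:, :-1]), h0)
        gen_loss = F.cross_entropy(
            self.out(dec_out).reshape(-1, self.out.out_features),
            response[:, 1:].reshape(-1),
        )

        # KL between the approximate posterior and the prior over z.
        kl = 0.5 * (
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0
        ).sum(-1).mean()

        # Real image features supervise the reconstructor for image-grounded
        # dialogues; for textual dialogues the image remains latent.
        recon_loss = (
            F.mse_loss(self.img_recon(z), image_feat)
            if image_feat is not None
            else torch.zeros((), device=z.device)
        )
        return gen_loss + kl + recon_loss


# Toy usage with random token ids and a random image feature.
model = LatentImageCVAE()
ctx = torch.randint(0, 1000, (2, 10))
rsp = torch.randint(0, 1000, (2, 12))
img = torch.randn(2, DIM)
loss_grounded = model(ctx, rsp, image_feat=img)   # image-grounded dialogue
loss_textual = model(ctx, rsp, image_feat=None)   # textual dialogue, latent image
(loss_grounded + loss_textual).backward()
```

In the paper's setting the image reconstructor relies on text-to-image generation; here a simple feature-regression term stands in for it purely to keep the sketch short and self-contained.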

Published

2021-05-18

How to Cite

Yang, Z., Wu, W., Hu, H., Xu, C., Wang, W., & Li, Z. (2021). Open Domain Dialogue Generation with Latent Images. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14239-14247. https://doi.org/10.1609/aaai.v35i16.17675

Issue

Vol. 35 No. 16 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing III