Building Goal-Oriented Dialogue Systems with Situated Visual Context

Authors

  • Sanchit Agarwal Amazon Alexa AI
  • Jan Jezabek Hedgefrog Software
  • Arijit Biswas Amazon Alexa AI
  • Emre Barut Amazon Alexa AI
  • Bill Gao Amazon Alexa AI
  • Tagyoung Chung Amazon Alexa AI

DOI:

https://doi.org/10.1609/aaai.v36i11.21710

Keywords:

Conversational AI, Natural Language Processing, Natural Language Understanding, Multimodal Machine Learning, Visually Grounded Dialogue

Abstract

Goal-oriented dialogue agents can comfortably utilize the conversational context and understand their users' goals. However, in visually driven user experiences, these conversational agents are also required to make sense of the screen context in order to provide a proper interactive experience. In this paper, we propose a novel multimodal conversational framework in which the dialogue agent's next action and its arguments are derived jointly, conditioned on both the conversational and the visual context. We demonstrate the proposed approach via a prototypical furniture shopping experience for a multimodal virtual assistant.
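To make the idea of joint conditioning concrete, here is a minimal, purely illustrative sketch: an (action, argument) pair scores higher when the argument is grounded in what is on screen and the action is grounded in the dialogue. All function names, features, and the shopping items are hypothetical, not the paper's actual model.

```python
# Hypothetical sketch of jointly choosing the agent's next action and its
# argument, conditioned on BOTH dialogue context and screen (visual) context.
# The scoring scheme and all names here are illustrative assumptions.

def score(action, argument, dialogue_ctx, screen_items):
    """Toy joint score: reward visual grounding (argument is visible on
    screen) and conversational grounding (action matches the user's goal)."""
    s = 0.0
    if argument in screen_items:   # situated visual context
        s += 1.0
    if action in dialogue_ctx:     # conversational context
        s += 1.0
    return s

def next_action(dialogue_ctx, screen_items, candidates):
    # Pick the (action, argument) pair with the highest joint score.
    return max(candidates,
               key=lambda c: score(c[0], c[1], dialogue_ctx, screen_items))

dialogue = {"show"}                          # user asked to see something
screen = {"blue sofa", "oak table"}          # items currently on screen
candidates = [("show", "blue sofa"),
              ("show", "red lamp"),
              ("add_to_cart", "oak table")]
print(next_action(dialogue, screen, candidates))  # ('show', 'blue sofa')
```

The point of the sketch is only that neither signal alone suffices: "show red lamp" matches the dialogue but not the screen, while "add_to_cart oak table" matches the screen but not the user's request.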

Published

2022-06-28

How to Cite

Agarwal, S., Jezabek, J., Biswas, A., Barut, E., Gao, B., & Chung, T. (2022). Building Goal-Oriented Dialogue Systems with Situated Visual Context. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13149-13151. https://doi.org/10.1609/aaai.v36i11.21710