Taming Continuous Posteriors for Latent Variational Dialogue Policies

Authors

  • Marin Vlastelica, Max Planck Institute for Intelligent Systems
  • Patrick Ernst, Amazon
  • Gyuri Szarvas, Amazon

DOI:

https://doi.org/10.1609/aaai.v37i11.26602

Keywords:

SNLP: Conversational AI/Dialogue Systems, ML: Reinforcement Learning Algorithms, ML: Deep Generative Models & Autoencoders

Abstract

Utilizing amortized variational inference for latent-action reinforcement learning (RL) has been shown to be an effective approach in Task-oriented Dialogue (ToD) systems for optimizing dialogue success. Until now, categorical posteriors have been argued to be one of the main drivers of performance. In this work we revisit Gaussian variational posteriors for latent-action RL and show that they can yield even better performance than categoricals. We achieve this by introducing an improved variational inference objective for learning continuous representations without auxiliary learning objectives, which streamlines the training procedure. Moreover, we propose ways to regularize the latent dialogue policy, which helps to retain good response coherence. Using continuous latent representations, our model achieves state-of-the-art dialogue success rate on the MultiWOZ benchmark, and also compares well to categorical latent methods in response coherence.
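As a rough illustration of the kind of Gaussian latent-action posterior the abstract refers to, the sketch below draws a latent action with the reparameterization trick and computes a KL penalty toward a standard Normal prior, as is common in latent variational dialogue policies. The module, layer sizes, and objective here are illustrative assumptions, not the paper's actual architecture or training objective.

```python
# Minimal sketch (assumed, not the paper's implementation): a Gaussian
# latent-action posterior conditioned on a dialogue-context encoding.
import torch
import torch.nn as nn


class GaussianLatentPolicy(nn.Module):
    def __init__(self, ctx_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(ctx_dim, hidden_dim), nn.ReLU())
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, context: torch.Tensor):
        h = self.encoder(context)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL( q(z | context) || N(0, I) ), summed over latent dimensions
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)
        return z, kl
```

In a latent-action RL setup of this general shape, z would be passed to a response decoder and the (possibly weighted) KL term added to the policy's learning objective; the paper's specific objective and regularization differ in the details described in the abstract.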

Published

2023-06-26

How to Cite

Vlastelica, M., Ernst, P., & Szarvas, G. (2023). Taming Continuous Posteriors for Latent Variational Dialogue Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13673-13681. https://doi.org/10.1609/aaai.v37i11.26602

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing