Towards Diverse, Relevant and Coherent Open-Domain Dialogue Generation via Hybrid Latent Variables

Authors

  • Bin Sun Beijing Institute of Technology, China
  • Yitong Li Huawei Noah's Ark Lab, Huawei Technologies Co., Ltd.
  • Fei Mi Huawei Noah's Ark Lab
  • Weichao Wang Huawei Noah's Ark Lab
  • Yiwei Li Beijing Institute of Technology, China
  • Kan Li Beijing Institute of Technology, China

DOI:

https://doi.org/10.1609/aaai.v37i11.26594

Keywords:

SNLP: Conversational AI/Dialogue Systems, SNLP: Generation

Abstract

Conditional variational models, using either continuous or discrete latent variables, are powerful for open-domain dialogue response generation. However, previous works show that continuous latent variables tend to reduce the coherence of generated responses. In this paper, we also find that discrete latent variables have difficulty capturing more diverse expressions. To tackle these problems, we combine the merits of both continuous and discrete latent variables and propose a Hybrid Latent Variable (HLV) method. Specifically, HLV constrains the global semantics of responses through discrete latent variables and enriches responses with continuous latent variables. Thus, we diversify the generated responses while maintaining relevance and coherence. In addition, we propose the Conditional Hybrid Variational Transformer (CHVT) to construct and utilize HLV with transformers for dialogue generation. Through fine-grained symbolic-level semantic information and additive Gaussian mixing, we construct the distribution of continuous variables, prompting the generation of diverse expressions. Meanwhile, to maintain relevance and coherence, the discrete latent variable is optimized by self-separation training. Experimental results on two dialogue generation datasets (DailyDialog and OpenSubtitles) show that CHVT is superior to traditional transformer-based variational mechanisms w.r.t. diversity, relevance and coherence metrics. Moreover, we also demonstrate the benefit of applying HLV to fine-tuning two pre-trained dialogue models (PLATO and BART-base).
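To make the hybrid-latent idea concrete, the following is a minimal NumPy sketch of one plausible reading of the abstract: a discrete latent variable is drawn with a Gumbel-softmax relaxation to pin down global semantics, and its (soft) assignment weights additively mix a set of component Gaussians to produce the continuous latent. All function names, shapes, and the temperature parameter here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=1.0):
    """Relaxed one-hot sample from a categorical distribution (discrete latent).
    Hypothetical helper; the paper's self-separation training is not modeled here."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def additive_gaussian_mix(means, log_vars, weights):
    """Additive Gaussian mixing: reparameterize each component Gaussian,
    then take the weighted sum of the K samples (continuous latent)."""
    eps = rng.standard_normal(means.shape)            # (K, d)
    samples = means + np.exp(0.5 * log_vars) * eps    # (K, d)
    return weights @ samples                          # (d,)

def hybrid_latent(logits, means, log_vars, tau=1.0):
    """Combine a discrete latent (global semantics) with a continuous
    latent (diverse expression) into one hybrid latent vector."""
    z_disc = gumbel_softmax_sample(logits, tau)               # (K,)
    z_cont = additive_gaussian_mix(means, log_vars, z_disc)   # (d,)
    return np.concatenate([z_disc, z_cont])                   # (K + d,)

# Toy example: K = 4 discrete categories, d = 8 continuous dimensions.
K, d = 4, 8
z = hybrid_latent(rng.standard_normal(K),
                  rng.standard_normal((K, d)),
                  rng.standard_normal((K, d)))
print(z.shape)  # (12,)
```

In a full model, `z` would condition the transformer decoder; resampling the Gaussian noise varies the surface form while the discrete part keeps the response on-topic.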


Published

2023-06-26

How to Cite

Sun, B., Li, Y., Mi, F., Wang, W., Li, Y., & Li, K. (2023). Towards Diverse, Relevant and Coherent Open-Domain Dialogue Generation via Hybrid Latent Variables. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13600-13608. https://doi.org/10.1609/aaai.v37i11.26594

Section

AAAI Technical Track on Speech & Natural Language Processing