DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances

Authors

  • Xiaodong Gu School of Software, Shanghai Jiao Tong University, China
  • Kang Min Yoo NAVER AI Lab, Korea
  • Jung-Woo Ha NAVER AI Lab, Korea

DOI:

https://doi.org/10.1609/aaai.v35i14.17527

Keywords:

Conversational AI/Dialog Systems

Abstract

Recent advances in pre-trained language models (PLMs) have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms strong baselines such as BART and DialoGPT in quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
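
As a rough illustration of the hierarchy the abstract describes, the following PyTorch sketch pairs a token-level utterance encoder with an utterance-level context encoder. All names, dimensions, and the [CLS]-style pooling are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class HierarchicalDialogEncoder(nn.Module):
    # A minimal two-level Transformer: a token-level encoder turns each
    # utterance into a vector, then a second encoder attends across the
    # utterance vectors to model discourse-level coherence.
    # Sizes and pooling here are assumptions for illustration only.
    def __init__(self, vocab_size=30522, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.utterance_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.context_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)

    def forward(self, token_ids):
        # token_ids: (batch, n_utterances, n_tokens)
        b, n, t = token_ids.shape
        tokens = self.embed(token_ids.view(b * n, t))
        encoded = self.utterance_encoder(tokens)
        # Pool the first position as the utterance vector ([CLS]-style).
        utt_vecs = encoded[:, 0, :].view(b, n, -1)
        # Discourse-level self-attention over utterance vectors; objectives
        # such as masked utterance regression and utterance order ranking
        # would be applied on top of these contextualized states.
        return self.context_encoder(utt_vecs)

# Example: 2 dialogues, each with 3 utterances of 16 tokens.
enc = HierarchicalDialogEncoder()
ctx = enc(torch.randint(0, 30522, (2, 3, 16)))  # -> shape (2, 3, 256)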

Published

2021-05-18

How to Cite

Gu, X., Yoo, K. M., & Ha, J.-W. (2021). DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12911-12919. https://doi.org/10.1609/aaai.v35i14.17527

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I