Coherent Dialogue with Attention-Based Language Models

Authors

  • Hongyuan Mei, Johns Hopkins University
  • Mohit Bansal, The University of North Carolina at Chapel Hill
  • Matthew Walter, Toyota Technological Institute at Chicago

DOI:

https://doi.org/10.1609/aaai.v31i1.10961

Keywords:

Coherent dialogue, Dialogue system, Neural attention, Neural language model, Neural dialogue model

Abstract

We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models, which have a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics (including complementary diversity-based metrics), as well as on human evaluation and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic-modeling-based reranking.
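
The sketch below is a rough illustration of the growing-scope idea described in the abstract, not the paper's exact formulation: at each decoding step, attention is computed over the hidden states of *all* words seen so far, and the attended set grows as new words are generated. The function and variable names (dynamic_attention_step, W_h, W_s, v), the additive (Bahdanau-style) scoring form, and the toy RNN update are all assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dynamic_attention_step(h_prev, history_states, v, W_h, W_s):
    """One decoding step attending over the entire history so far.

    history_states grows as the conversation continues, so the attention
    scope grows with it (unlike fixed-scope seq2seq attention).
    Additive scoring form is an assumption, not the paper's exact one.
    """
    H = np.stack(history_states)                  # (t, d) all history states
    scores = np.tanh(H @ W_h + h_prev @ W_s) @ v  # (t,) one score per history word
    alpha = softmax(scores)                       # attention weights over history
    context = alpha @ H                           # (d,) weighted summary of history
    return context, alpha

# Toy usage: the attended history grows by one state per generated word.
rng = np.random.default_rng(0)
d, d_a = 8, 8
W_h = rng.normal(size=(d, d_a))
W_s = rng.normal(size=(d, d_a))
v = rng.normal(size=d_a)

history = [rng.normal(size=d) for _ in range(5)]  # states for the context so far
h = rng.normal(size=d)                            # current RNN state
for _ in range(3):
    ctx, alpha = dynamic_attention_step(h, history, v, W_h, W_s)
    h = np.tanh(0.5 * h + 0.5 * ctx)              # stand-in for the real RNN update
    history.append(h)                             # scope grows each step
```

The key design point, per the abstract, is that the attended set is not fixed in advance: each generated word can attach weight to any word anywhere in the conversation history, which is what lets even a vanilla RNN recover flexible, long-distance memory.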

Published

2017-02-12

How to Cite

Mei, H., Bansal, M., & Walter, M. (2017). Coherent Dialogue with Attention-Based Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10961