Customized Nonlinear Bandits for Online Response Selection in Neural Conversation Models

Authors

  • Bing Liu, Carnegie Mellon University
  • Tong Yu, Carnegie Mellon University
  • Ian Lane, Carnegie Mellon University
  • Ole Mengshoel, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v32i1.12028

Keywords:

Bandit, Neural Network, Dialog, Response Selection

Abstract

Dialog response selection is an important step towards natural response generation in conversational agents. Existing work on neural conversational models mainly focuses on offline supervised learning using a large set of context-response pairs. In this paper, we focus on online learning of response selection in retrieval-based dialog systems. We propose a contextual multi-armed bandit model with a nonlinear reward function that uses distributed representations of text for online response selection. A bidirectional LSTM is used to produce the distributed representations of the dialog context and candidate responses, which serve as input to the contextual bandit. To learn the bandit, we propose a customized Thompson sampling method that is applied in a polynomial feature space to approximate the reward. Experimental results on the Ubuntu Dialogue Corpus demonstrate significant performance gains of the proposed method over conventional linear contextual bandits. Moreover, we report encouraging response selection performance of the proposed neural bandit model under the Recall@k metric with only a small set of online training samples.
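The core idea in the abstract, Thompson sampling with a reward model fit in a polynomial feature space, can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact method: the degree-2 feature map, the Bayesian linear reward model, and all hyperparameter names here are assumptions for the sketch; in the paper the raw features would come from bidirectional LSTM encodings of the dialog context and candidate responses.

```python
import numpy as np

def poly_features(x):
    """Degree-2 polynomial expansion of a context-response feature vector
    (an illustrative choice; the paper's exact feature map may differ)."""
    return np.concatenate([[1.0], x, x * x])

class PolyThompsonBandit:
    """Thompson sampling with a Bayesian linear reward model over
    polynomial features: sample reward weights from the posterior,
    then pick the candidate response maximizing the sampled reward."""

    def __init__(self, d, noise_var=0.25, prior_var=1.0):
        self.A = np.eye(d) / prior_var   # posterior precision matrix
        self.b = np.zeros(d)             # precision-weighted mean vector
        self.noise_var = noise_var

    def select(self, candidate_feats):
        """Sample weights from the posterior and score each candidate."""
        cov = np.linalg.inv(self.A)
        mean = cov @ self.b
        w = np.random.multivariate_normal(mean, self.noise_var * cov)
        scores = [phi @ w for phi in candidate_feats]
        return int(np.argmax(scores))

    def update(self, phi, reward):
        """Bayesian linear-regression posterior update for the chosen arm."""
        self.A += np.outer(phi, phi) / self.noise_var
        self.b += phi * reward / self.noise_var
```

In an online response-selection loop, each candidate response would be encoded (here, via the hypothetical `poly_features` map over LSTM-derived vectors), `select` would pick one to show, and the observed user feedback would be fed back through `update`, so exploration naturally narrows as the posterior concentrates.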

Published

2018-04-27

How to Cite

Liu, B., Yu, T., Lane, I., & Mengshoel, O. (2018). Customized Nonlinear Bandits for Online Response Selection in Neural Conversation Models. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12028