Conversational Model Adaptation via KL Divergence Regularization

Authors

  • Juncen Li, Tencent
  • Ping Luo, Institute of Computing Technology, CAS, Beijing; University of Chinese Academy of Sciences, Beijing
  • Fen Lin, Tencent
  • Bo Chen, Tencent

DOI:

https://doi.org/10.1609/aaai.v32i1.11953

Abstract

In this study we formulate the problem of conversational model adaptation, where we aim to build a generative conversational model for a target domain from a limited amount of dialogue data in that domain together with existing dialogue models from related source domains. This setting facilitates the fast building of a chatbot platform, where a new vertical chatbot with only a small amount of conversation data can be supported by other related, mature chatbots. Previous studies on model adaptation and transfer learning mostly focus on classification and recommendation problems; how these models work for conversation generation remains unexplored. To this end, we leverage a KL divergence (KLD) regularization to adapt existing conversational models. Specifically, the method employs the KLD to measure the distance between the source and target domains. Adding the KLD as a regularization term to the objective function allows the proposed method to utilize information from the source domains effectively. We also evaluate the performance of this adaptation model for online chatbots on the WeChat public accounts platform, using both the BLEU metric and human judgement. The experiments empirically show that the proposed method visibly improves these evaluation metrics.
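The core idea described above, training on the small target-domain corpus while a KLD term keeps the adapted model close to a source-domain model, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: names such as target_logits, source_logits, and kld_weight are assumptions, and the exact direction and weighting of the KL term in the paper may differ.

    # Minimal sketch of a KLD-regularized adaptation objective (assumed form).
    import torch
    import torch.nn.functional as F

    def adaptation_loss(target_logits, source_logits, gold_ids,
                        kld_weight=0.1, pad_id=0):
        """Cross-entropy on the target-domain data plus a KL penalty that keeps
        the adapted model's output distribution near the source model's."""
        vocab = target_logits.size(-1)
        # Standard negative log-likelihood on the (small) target-domain corpus.
        nll = F.cross_entropy(target_logits.view(-1, vocab),
                              gold_ids.view(-1), ignore_index=pad_id)
        # KL divergence between the frozen source model's per-step distribution
        # over the vocabulary and the adapted model's distribution
        # (KL(source || target) here; the paper's direction may differ).
        log_p_target = F.log_softmax(target_logits, dim=-1)
        p_source = F.softmax(source_logits, dim=-1)
        kld = F.kl_div(log_p_target, p_source, reduction="batchmean")
        return nll + kld_weight * kld

In this sketch the source model is frozen and only supplies reference distributions; the assumed kld_weight hyperparameter trades off fitting the limited target-domain data against staying close to the mature source-domain model.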

Published

2018-04-27

How to Cite

Li, J., Luo, P., Lin, F., & Chen, B. (2018). Conversational Model Adaptation via KL Divergence Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11953