Relevance-Promoting Language Model for Short-Text Conversation

Authors

  • Xin Li The Chinese University of Hong Kong
  • Piji Li Tencent AI Lab
  • Wei Bi Tencent AI Lab
  • Xiaojiang Liu Tencent AI Lab
  • Wai Lam The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v34i05.6340

Abstract

Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploiting the training data (i.e., the supervision signals from the query text are ignored) remains unresolved. Moreover, the commonly adopted maximization-based decoding strategies, which tend to produce generic or repetitive responses, are ill-suited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model, which performs an additional supervised source attention after the self-attention to increase the importance of informative query tokens when computing token-level representations. The model further refines the query representation with relevance clues inferred from its multiple references during training. At test time, we adopt a randomization-over-maximization strategy to reduce the generation of generic responses. Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on both relevance and diversity metrics.
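The sketch below is not the authors' released code; it is a minimal, hypothetical PyTorch illustration of the two ideas named in the abstract, under two assumptions: that the additional "source attention" behaves like standard multi-head cross-attention from the generated stream onto the query tokens (applied right after masked self-attention), and that "randomization-over-maximization" amounts to sampling from the top-k candidate tokens rather than taking the argmax. All class, function, and argument names here are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SourceAttentiveBlock(nn.Module):
    """Transformer block: masked self-attention, then attention over query (source) tokens."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x, src, causal_mask=None):
        # Standard masked self-attention over the token stream.
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # Additional source attention: re-attend to the query tokens so that
        # informative query tokens carry more weight in the token representation.
        # The returned attention weights could, in principle, be supervised.
        h, src_weights = self.src_attn(x, src, src)
        x = self.norm2(x + h)
        x = self.norm3(x + self.ffn(x))
        return x, src_weights


def sample_next_token(logits: torch.Tensor, k: int = 20, temperature: float = 1.0):
    """Assumed randomization-over-maximization step: sample from the top-k tokens."""
    topk_logits, topk_ids = torch.topk(logits / temperature, k)
    probs = F.softmax(topk_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_ids.gather(-1, choice)
```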

Published

2020-04-03

How to Cite

Li, X., Li, P., Bi, W., Liu, X., & Lam, W. (2020). Relevance-Promoting Language Model for Short-Text Conversation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8253-8260. https://doi.org/10.1609/aaai.v34i05.6340

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing