Differentiated Distribution Recovery for Neural Text Generation

Authors

  • Jianing Li, Chinese Academy of Sciences
  • Yanyan Lan, Chinese Academy of Sciences
  • Jiafeng Guo, Chinese Academy of Sciences
  • Jun Xu, Chinese Academy of Sciences
  • Xueqi Cheng, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v33i01.33016682

Abstract

Neural language models based on recurrent neural networks (RNNLM) have significantly improved the performance of text generation, yet the quality of generated text, as measured by Turing Test pass rate, is still far from satisfactory. Some researchers propose to use adversarial training or reinforcement learning to improve quality; however, such methods usually introduce great challenges in training and parameter tuning. Through our analysis, we find that the problem of RNNLM stems from the use of maximum likelihood estimation (MLE) as the objective function, which requires the generated distribution to precisely recover the true distribution. This requirement favors high generation diversity, which restricts generation quality. It is not suitable when the overall quality is low, since high generation diversity usually indicates many errors rather than diverse good samples. In this paper, we propose to achieve differentiated distribution recovery, DDR for short. The key idea is to make the optimal generation probability proportional to the β-th power of the true probability, where β > 1. In this way, generation quality can be greatly improved by sacrificing diversity on noise and rare patterns. Experiments on synthetic data and two public text datasets show that our DDR method achieves a more flexible quality-diversity trade-off and a higher Turing Test pass rate than baseline methods including RNNLM, SeqGAN, and LeakGAN.
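
The β-power relationship at the heart of DDR can be illustrated with a minimal Python sketch (a toy illustration only, not the paper's training procedure; the function name power_sharpen, the use of NumPy, and the toy distribution are assumptions for exposition):

    import numpy as np

    def power_sharpen(p, beta=2.0):
        """Raise a distribution to the beta-th power and renormalize.

        For beta > 1, probability mass concentrates on frequent patterns
        and is drawn away from noise and rare patterns -- the
        quality-diversity trade-off DDR's optimality condition describes.
        """
        q = np.power(p, beta)
        return q / q.sum()

    # Toy "true" distribution: two frequent patterns, three rare/noisy ones.
    p_true = np.array([0.40, 0.35, 0.10, 0.10, 0.05])
    print(power_sharpen(p_true, beta=2.0))
    # -> approx. [0.53, 0.41, 0.03, 0.03, 0.01]; mass shifts to the frequent patterns

Setting beta = 1 recovers the ordinary MLE target (exact recovery of the true distribution); larger beta trades more diversity for quality.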

Published

2019-07-17

How to Cite

Li, J., Lan, Y., Guo, J., Xu, J., & Cheng, X. (2019). Differentiated Distribution Recovery for Neural Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6682-6689. https://doi.org/10.1609/aaai.v33i01.33016682

Section

AAAI Technical Track: Natural Language Processing