Controlling Global Statistics in Recurrent Neural Network Text Generation

Authors

  • Thanapon Noraset, Northwestern University
  • David Demeter, Northwestern University
  • Doug Downey, Northwestern University

DOI:

https://doi.org/10.1609/aaai.v32i1.11993

Keywords:

recurrent neural network, natural language generation, regularization

Abstract

Recurrent neural network language models (RNNLMs) are an essential component for many language generation tasks such as machine translation, summarization, and automated conversation. Often, we would like to subject the text generated by the RNNLM to constraints, in order to overcome systemic errors (e.g. word repetition) or achieve application-specific goals (e.g. more positive sentiment). In this paper, we present a method for training RNNLMs to simultaneously optimize likelihood and follow a given set of statistical constraints on text generation.  The problem is challenging because the statistical constraints are defined over aggregate model behavior, rather than model parameters, meaning that a straightforward parameter regularization approach is insufficient.  We solve this problem using a dynamic regularizer that updates as training proceeds, based on the generative behavior of the RNNLMs.  Our experiments show that the dynamic regularizer outperforms both generic training and a static regularization baseline.  The approach is successful at improving word-level repetition statistics by a factor of four in RNNLMs on a definition modeling task.  It also improves model perplexity when the statistical constraints are $n$-gram statistics taken from a large corpus.
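
The abstract does not spell out the regularizer's exact form, so the sketch below is only a minimal illustration of the general idea, assuming PyTorch, a toy LSTM language model, and a word-repetition statistic as the constrained quantity. The penalty weight on repeating the previous token is re-estimated from the model's own samples as training proceeds, mirroring the point that the constraint is defined over aggregate generative behavior rather than over model parameters. The names `TinyLM`, `repetition_rate`, `TARGET_REP`, and the random toy data are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, SEQ_LEN, TARGET_REP = 50, 32, 64, 20, 0.02

class TinyLM(nn.Module):
    """A toy LSTM language model standing in for the RNNLM."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, x, state=None):
        h, state = self.rnn(self.emb(x), state)
        return self.out(h), state

def sample(model, batch=32, length=SEQ_LEN):
    """Draw sequences from the model to observe its generative behavior."""
    x = torch.zeros(batch, 1, dtype=torch.long)  # token 0 used as <bos> here
    seq, state = [], None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(x, state)
            x = torch.multinomial(F.softmax(logits[:, -1], dim=-1), 1)
            seq.append(x)
    return torch.cat(seq, dim=1)

def repetition_rate(seq):
    """Fraction of generated tokens that repeat the preceding token."""
    return (seq[:, 1:] == seq[:, :-1]).float().mean().item()

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
penalty_weight = 0.0  # adjusted dynamically from sampled statistics

for step in range(200):
    # Toy training batch; in practice this comes from a real corpus.
    data = torch.randint(1, VOCAB, (32, SEQ_LEN + 1))
    inputs, targets = data[:, :-1], data[:, 1:]

    logits, _ = model(inputs)
    nll = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))

    # Differentiable surrogate for the repetition statistic: the probability
    # mass the model places on emitting the token it has just read.
    probs = F.softmax(logits, dim=-1)
    rep_prob = probs.gather(-1, inputs.unsqueeze(-1)).mean()

    loss = nll + penalty_weight * rep_prob
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Dynamic update: periodically sample from the current model and move the
    # penalty weight toward whatever keeps generation near the target statistic.
    if step % 20 == 0:
        measured = repetition_rate(sample(model))
        penalty_weight = max(0.0, penalty_weight + 5.0 * (measured - TARGET_REP))
        print(f"step {step}: nll={nll.item():.3f} "
              f"sampled_rep={measured:.3f} weight={penalty_weight:.2f}")
```

The key point carried over from the abstract is that a static penalty on parameters would not suffice: because the constrained statistic is measured on sampled text, the regularization term (here, just its weight) has to be refreshed as the model's generative behavior changes during training.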

Published

2018-04-27

How to Cite

Noraset, T., Demeter, D., & Downey, D. (2018). Controlling Global Statistics in Recurrent Neural Network Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11993