TextGAIL: Generative Adversarial Imitation Learning for Text Generation

Authors

  • Qingyang Wu University of California, Davis
  • Lei Li ByteDance AI Lab
  • Zhou Yu University of California, Davis

Keywords

Generation

Abstract

Generative Adversarial Networks (GANs) for text generation have recently received much criticism, as they tend to perform worse than their maximum likelihood estimation (MLE) counterparts. We suspect that the inferior performance of previous text GANs stems from the lack of a reliable guiding signal from their discriminators. To address this problem, we propose a generative adversarial imitation learning framework for text generation that uses large pre-trained language models to provide more reliable reward guidance. Because previous text GANs suffer from high gradient variance, we apply a contrastive discriminator and proximal policy optimization (PPO) to stabilize training and improve text generation performance. For evaluation, we conduct experiments on a diverse set of unconditional and conditional text generation tasks. Experimental results show that TextGAIL achieves better performance than the MLE baseline in terms of both quality and diversity. We also validate, with an additional task, our intuition that TextGAIL's discriminator can provide reasonable rewards.
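The two stabilizing ingredients the abstract names can be sketched in isolation. Below is a minimal, hypothetical illustration (not the paper's implementation): a contrastive discriminator scores a real and a generated text jointly, and the generated text's softmax probability of looking human serves as the reward; PPO then clips the policy's probability ratio to limit gradient variance. All function names and the `eps` default are assumptions for illustration.

```python
import math

def contrastive_reward(score_real: float, score_fake: float) -> float:
    """Softmax over a (real, generated) pair of discriminator scores.

    The generated sample's probability of being judged "real" is used
    as its reward, so rewards are relative rather than absolute.
    """
    m = max(score_real, score_fake)          # subtract max for numerical stability
    e_real = math.exp(score_real - m)
    e_fake = math.exp(score_fake - m)
    return e_fake / (e_real + e_fake)

def ppo_clipped_objective(log_prob_new: float, log_prob_old: float,
                          advantage: float, eps: float = 0.2) -> float:
    """PPO clipped surrogate objective for a single action (token sequence).

    The probability ratio is clipped to [1 - eps, 1 + eps], which bounds
    the size of each policy update and reduces gradient variance.
    """
    ratio = math.exp(log_prob_new - log_prob_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped * advantage)
```

For example, when the discriminator cannot tell the two texts apart (equal scores), the reward is 0.5; when the new policy's probability ratio exceeds `1 + eps`, the clipped term caps the objective and the update is bounded.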

Published

2021-05-18

How to Cite

Wu, Q., Li, L., & Yu, Z. (2021). TextGAIL: Generative Adversarial Imitation Learning for Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14067-14075. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17656

Section

AAAI Technical Track on Speech and Natural Language Processing III