Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation


  • Juntao Li Peking University
  • Lisong Qiu Peking University
  • Bo Tang Southern University of Science and Technology
  • Dongmin Chen Peking University
  • Dongyan Zhao Peking University
  • Rui Yan Peking University



Recent successes in open-domain dialogue generation rely mainly on advances in deep neural networks, whose effectiveness depends on the amount of training data. Since acquiring huge amounts of data is laborious and expensive in most scenarios, how to effectively utilize existing data is the crux of this issue. In this paper, we use data augmentation techniques to improve the performance of neural dialogue models under insufficient-data conditions. Specifically, we propose a novel generative model to augment existing data, where a conditional variational autoencoder (CVAE) is employed as the generator to output additional training data with diversified expressions. To improve the correlation within each augmented training pair, we design a discriminator with adversarial training to supervise the augmentation process. Moreover, we thoroughly investigate various data augmentation schemes for neural dialogue systems with generative models, both GAN- and CVAE-based. Experimental results on two open corpora, Weibo and Twitter, demonstrate the superiority of our proposed data augmentation model.
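The core of the CVAE generator is the reparameterization trick: the query is encoded into the parameters of a latent Gaussian, and each latent sample decoded from it yields a diversified response for augmentation. The sketch below illustrates only this sampling step with NumPy; the encoder here is a stand-in (a real system would use neural encoder/decoder networks conditioned on the dialogue query), so all function names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling this way keeps z
    # differentiable w.r.t. mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def augment(query_vec, n_samples=3):
    # Hypothetical encoder: in the actual model, a neural network maps the
    # query (and, at training time, the response) to the latent Gaussian's
    # parameters. Fixed values are used here purely for illustration.
    mu = 0.1 * query_vec
    log_var = np.full_like(query_vec, -2.0)
    # Each latent sample would then be decoded into a new candidate
    # response, forming an augmented (query, response) training pair.
    return [reparameterize(mu, log_var) for _ in range(n_samples)]

samples = augment(np.ones(8))
print(len(samples), samples[0].shape)
```

In the full model, each decoded sample would additionally be scored by the adversarially trained discriminator, which filters or weights augmented pairs so that only well-correlated query-response pairs enter the training set.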




How to Cite

Li, J., Qiu, L., Tang, B., Chen, D., Zhao, D., & Yan, R. (2019). Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6698-6705.



AAAI Technical Track: Natural Language Processing