Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning


  • Dayiheng Liu Sichuan University
  • Jie Fu Polytechnique Montreal
  • Yidan Zhang Sichuan University
  • Chris Pal Polytechnique Montreal
  • Jiancheng Lv Sichuan University



Typical methods for unsupervised text style transfer rely on two key ingredients: 1) explicit disentanglement of content and attributes, and 2) troublesome adversarial learning. In this paper, we show that neither component is indispensable. We propose a new framework that uses gradients to revise a sentence in a continuous space at inference time to achieve text style transfer. Our method consists of three key components: a variational auto-encoder (VAE), attribute predictors (one per attribute), and a content predictor. The VAE and the two types of predictors enable gradient-based optimization in the continuous space, mapped from the discrete sentence space, to find the representation of a target sentence with the desired attributes and preserved content. Moreover, the proposed method naturally supports manipulating multiple fine-grained attributes simultaneously, such as sentence length and the presence of specific words, when performing text style transfer tasks. Compared with previous adversarial-learning-based methods, the proposed method is more interpretable, controllable, and easier to train. Extensive experiments on three popular text style transfer tasks show that the proposed method significantly outperforms five state-of-the-art methods.
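The inference-time revision described above can be sketched in a few lines: encode a sentence into a latent code, then take gradient steps on that code to lower an attribute-predictor loss while a content-predictor term keeps the content close to the source. The sketch below is a minimal, hedged illustration, not the paper's implementation: the predictors are toy linear modules, the VAE encoder/decoder are omitted (we start directly from a latent vector), and all loss weights and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT = 16  # illustrative latent dimensionality

# Toy stand-ins for the paper's pretrained predictors (shapes are assumptions):
attr_predictor = nn.Sequential(nn.Linear(LATENT, 1), nn.Sigmoid())  # e.g. sentiment score
content_predictor = nn.Linear(LATENT, LATENT)                       # content representation

def revise(z_src, target_attr, steps=50, lr=0.1, content_weight=1.0):
    """Revise a latent code by gradient descent: push the attribute
    prediction toward target_attr while keeping the content prediction
    close to that of the source latent code."""
    content_ref = content_predictor(z_src).detach()  # content to preserve
    z = z_src.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    bce = nn.BCELoss()
    for _ in range(steps):
        opt.zero_grad()
        attr_loss = bce(attr_predictor(z), target_attr)
        content_loss = ((content_predictor(z) - content_ref) ** 2).mean()
        (attr_loss + content_weight * content_loss).backward()
        opt.step()  # revision happens in continuous space, no decoder needed yet
    return z.detach()

z_src = torch.randn(1, LATENT)        # in the paper this comes from the VAE encoder
target = torch.ones(1, 1)             # desired attribute value (e.g. "positive")
z_new = revise(z_src, target)         # revised code; the VAE decoder would generate the sentence
```

In the actual framework, `z_new` would be fed to the VAE decoder to generate the transferred sentence; adding further predictors (e.g. for sentence length) simply adds more loss terms to the same objective.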




How to Cite

Liu, D., Fu, J., Zhang, Y., Pal, C., & Lv, J. (2020). Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8376-8383.



AAAI Technical Track: Natural Language Processing