Change or Not: A Simple Approach for Plug and Play Language Models on Sentiment Control


  • Chen Xu Beijing University of Technology
  • Jianyu Zhao Lenovo Research
  • Rang Li Lenovo Research
  • Changjian Hu Lenovo Research
  • Chuangbai Xiao Beijing University of Technology



Natural Language Generation, Deep Learning, Controlled Text Generation, Sentiment Control, Language Model


Text generation with sentiment control is difficult without fine-tuning or modifying the model architecture. The Plug and Play Language Model (PPLM) uses an external sentiment classifier to update the hidden states of GPT-2 at each time step; it does not change the model's parameters yet achieves competitive performance. However, fluency suffers from the instability of the perturbed hidden states. Moreover, the classifier is weak because it is trained on partial texts, which makes it difficult to guide generation during decoding. To address these problems, we first propose a fixed-threshold method based on the Valence-Arousal-Dominance (VAD) lexicon that decides whether to change a word, preserving the fluency of the original LM to the greatest extent. To further improve sentiment alignment, we propose a dynamic-threshold method that uses a VAD-based loss to adapt the threshold. Experiments demonstrate that our methods outperform the baseline by a large margin in both fluency and sentiment accuracy.
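The fixed-threshold idea can be illustrated with a minimal sketch: keep the LM's top prediction when its lexicon valence is already close to the target sentiment, and only swap it for a better-aligned candidate when it falls outside the threshold. The lexicon values, threshold, and function names below are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a fixed-threshold decision on
# whether to swap the LM's predicted word, using a toy VAD-style lexicon.
# Lexicon entries and the threshold value are hypothetical.

VAD = {  # word -> valence in [0, 1] (toy values; a real VAD lexicon covers ~20k words)
    "terrible": 0.05, "bad": 0.20, "okay": 0.50,
    "good": 0.80, "wonderful": 0.95,
}

def choose_word(lm_candidates, target_valence, threshold=0.3):
    """Keep the LM's top word if its valence is within `threshold` of the
    target; otherwise pick the candidate whose valence best matches it.

    lm_candidates: candidate words ranked by LM probability (best first).
    """
    top = lm_candidates[0]
    # Words missing from the lexicon are treated as neutral (0.5).
    if abs(VAD.get(top, 0.5) - target_valence) <= threshold:
        return top  # within threshold: keep the LM's choice, preserving fluency
    # Outside threshold: swap for the most sentiment-aligned candidate.
    return min(lm_candidates, key=lambda w: abs(VAD.get(w, 0.5) - target_valence))

# Positive target: the negative top word "terrible" is swapped out.
print(choose_word(["terrible", "okay", "wonderful"], target_valence=0.9))  # wonderful
```

The dynamic-threshold variant described in the abstract would replace the constant `threshold` with a value adapted during generation via a VAD-based loss; that adaptation is not shown here.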




How to Cite

Xu, C., Zhao, J., Li, R., Hu, C., & Xiao, C. (2021). Change or Not: A Simple Approach for Plug and Play Language Models on Sentiment Control. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15935-15936.



AAAI Student Abstract and Poster Program