Preference-Controlled Multi-Objective Reinforcement Learning for Conditional Text Generation


  • Wenqing Chen, Sun Yat-sen University
  • Jidong Tian, Shanghai Jiao Tong University
  • Caoyun Fan, Shanghai Jiao Tong University
  • Yitian Li, Shanghai Jiao Tong University
  • Hao He, Shanghai Jiao Tong University
  • Yaohui Jin, Shanghai Jiao Tong University



Keywords: SNLP: Generation, SNLP: Applications


Abstract

Conditional text generation aims to generate text sequences conditioned on linguistic or non-linguistic data. The main line of existing work proposes deterministic models that improve the fidelity of the generated text but often ignore its diversity. Another line relies on conditional variational auto-encoders (CVAEs), which increase diversity over their deterministic backbones. However, CVAEs treat diversity only as an implicit objective, which may not be optimal. In this paper, we raise two questions: i) Can diversity be further improved with an explicit objective? ii) Since fidelity and diversity are two conflicting objectives, how can we obtain different multi-objective optimal solutions according to user preferences? To answer question i), we propose a multi-objective reinforcement learning (MORL) method that explicitly takes the CIDEr and Self-CIDEr scores as the fidelity-oriented and diversity-oriented rewards, respectively. To answer question ii), we propose a preference-controlled MORL method, which can obtain an infinite set of multi-objective optimal solutions by tuning a preference variable. We conduct extensive experiments on paraphrasing and image captioning tasks, which show that, in the fidelity-diversity trade-off space, our model outperforms both deterministic and CVAE-based baselines.
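One simple way to realize such a preference variable is a linear scalarization of the two rewards before the policy update. The sketch below illustrates only this idea; it is not the authors' released implementation, and the function name `preference_reward` and the assumption that per-batch CIDEr and Self-CIDEr values are already computed are ours.

```python
# Illustrative sketch only: linearly scalarize a fidelity reward (CIDEr)
# and a diversity reward (Self-CIDEr) under a user preference `lam`.
# The paper's preference-controlled MORL method may condition the policy
# on the preference differently.

def preference_reward(cider: float, self_cider: float, lam: float) -> float:
    """Combine fidelity and diversity rewards with preference lam in [0, 1].

    lam -> 1 emphasizes fidelity (CIDEr); lam -> 0 emphasizes diversity
    (Self-CIDEr). Sweeping lam traces out different points in the
    fidelity-diversity trade-off space.
    """
    if not 0.0 <= lam <= 1.0:
        raise ValueError("preference lam must lie in [0, 1]")
    return lam * cider + (1.0 - lam) * self_cider


if __name__ == "__main__":
    # Hypothetical reward values for one sampled batch.
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"lam={lam:.2f}  reward={preference_reward(1.2, 0.8, lam):.3f}")
```

Under this reading, each setting of the preference variable selects a different multi-objective optimum, which is how tuning it can yield the infinite family of trade-off solutions the abstract describes.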




How to Cite

Chen, W., Tian, J., Fan, C., Li, Y., He, H., & Jin, Y. (2023). Preference-Controlled Multi-Objective Reinforcement Learning for Conditional Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12662-12672.



AAAI Technical Track on Speech & Natural Language Processing