Preference-Controlled Multi-Objective Reinforcement Learning for Conditional Text Generation
DOI:
https://doi.org/10.1609/aaai.v37i11.26490
Keywords:
SNLP: Generation, SNLP: Applications
Abstract
Conditional text generation aims to produce text sequences conditioned on linguistic or non-linguistic data. The main line of existing work proposed deterministic models to improve the fidelity of the generated text but often ignored diversity. Another line relied on conditional variational auto-encoders (CVAEs), which increased diversity over their deterministic backbones. However, CVAEs treat diversity as an implicit objective, which may not be optimal. In this paper, we raise two questions: i) Can diversity be further improved with an explicit objective? ii) Since fidelity and diversity are two conflicting objectives, how can we obtain different multi-objective optimal solutions according to user preferences? To answer question i), we propose a multi-objective reinforcement learning (MORL) method that explicitly takes the CIDEr and Self-CIDEr scores as the fidelity-oriented and diversity-oriented rewards, respectively. To answer question ii), we propose a preference-controlled MORL method, which can obtain infinitely many multi-objective optimal solutions by tuning a preference variable. Extensive experiments on paraphrasing and image captioning tasks show that, in the fidelity-diversity trade-off space, our model outperforms both deterministic and CVAE-based baselines.
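A minimal sketch of the preference-controlled reward described in the abstract, assuming a simple linear scalarization (the paper's exact MORL formulation is not reproduced here): a user preference in [0, 1] blends a CIDEr-style fidelity reward with a Self-CIDEr-style diversity reward, so each preference setting targets a different point on the fidelity-diversity trade-off. The function name and the scorer interfaces are hypothetical placeholders, not the authors' implementation.

# Minimal illustrative sketch (assumption: linear scalarization of the two rewards).
from typing import Callable, Sequence


def preference_scalarized_reward(
    candidates: Sequence[str],
    references: Sequence[str],
    preference: float,
    cider_score: Callable[[Sequence[str], Sequence[str]], float],
    self_cider_score: Callable[[Sequence[str]], float],
) -> float:
    """Blend fidelity (CIDEr against references) and diversity (Self-CIDEr over
    the sampled candidate set) with a user preference in [0, 1]; a larger
    preference favors fidelity."""
    assert 0.0 <= preference <= 1.0
    fidelity = cider_score(candidates, references)    # fidelity-oriented reward
    diversity = self_cider_score(candidates)          # diversity-oriented reward
    return preference * fidelity + (1.0 - preference) * diversity


if __name__ == "__main__":
    # Dummy scorers for demonstration only; real CIDEr/Self-CIDEr scorers
    # would take their place.
    r = preference_scalarized_reward(
        candidates=["a cat sits on a mat", "a kitten rests on the rug"],
        references=["a cat is on the mat"],
        preference=0.7,
        cider_score=lambda cands, refs: 0.8,
        self_cider_score=lambda cands: 0.6,
    )
    print(r)  # 0.7 * 0.8 + 0.3 * 0.6 = 0.74

In a policy-gradient setup, this scalarized reward would score sampled generations during training; sweeping the preference value traces out different fidelity-diversity operating points.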
Published
2023-06-26
How to Cite
Chen, W., Tian, J., Fan, C., Li, Y., He, H., & Jin, Y. (2023). Preference-Controlled Multi-Objective Reinforcement Learning for Conditional Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12662-12672. https://doi.org/10.1609/aaai.v37i11.26490
Issue
Vol. 37 No. 11 (2023)
Section
AAAI Technical Track on Speech & Natural Language Processing