RLMR: Reinforcement Learning with Mixed Rewards for Creative Writing

Authors

  • JianXing Liao, Tencent Hunyuan Team
  • Tian Zhang, Tencent Hunyuan Team
  • Xiao Feng, Tencent Hunyuan Team
  • Yusong Zhang, Tencent Hunyuan Team
  • Haorui Wang, Tencent Hunyuan Team
  • Bosi Wen, Tsinghua University
  • Ziying Wang, Peking University
  • Runzhi Shi, Peking University

DOI:

https://doi.org/10.1609/aaai.v40i38.40467

Abstract

Large language models are widely used in creative writing applications. Creative writing requires balancing subjective writing quality (e.g., literariness and emotional expression) with objective constraint following (e.g., format requirements and word limits). Existing reinforcement learning methods struggle to balance these two aspects: single-reward strategies fail to improve both abilities simultaneously, while fixed-weight mixed-reward methods cannot adapt to different writing scenarios. To address this problem, we propose Reinforcement Learning with Mixed Rewards (RLMR), which uses a dynamically mixed reward signal from a writing reward model that evaluates subjective writing quality and a constraint-verification model that assesses objective constraint following. The constraint-following reward weight is adjusted dynamically according to the writing quality within each sampled group, ensuring that samples violating constraints receive a negative advantage in GRPO and are thus penalized during training; this is the key innovation of the proposed method. We conduct automated and manual evaluations across diverse model families ranging from 8B to 72B parameters. We also construct a real-world writing benchmark, WriteEval, for comprehensive evaluation. Results show that our method achieves consistent improvements in both instruction following (IFEval from 83.36% to 86.65%) and writing quality (72.75% win rate in manual expert pairwise evaluations on WriteEval). To the best of our knowledge, RLMR is the first work to combine subjective preferences with objective verification in online RL training, providing an effective solution for multi-dimensional creative writing optimization.
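The abstract's core mechanism can be sketched in a few lines. The paper's exact weighting rule is not given on this page, so the sketch below assumes one simple dynamic choice: clamp every constraint-violating sample's reward below the worst constraint-satisfying sample in its group, which guarantees violators end up with a negative group-normalized (GRPO-style) advantage whenever at least one group member satisfies the constraints. The function names and the `margin` parameter are illustrative, not from the paper.

```python
# Minimal sketch of a dynamic mixed reward with group-relative advantages.
# Assumptions (not from the paper): `quality` comes from a writing reward
# model, `ok` from a constraint verifier, and the dynamic weight is realized
# as a per-group floor rather than an explicit scalar weight.
from statistics import mean, pstdev

def mixed_rewards(quality, ok, margin=0.1):
    """Clamp violators below the worst constraint-satisfying quality score."""
    if any(ok) and not all(ok):
        floor = min(q for q, o in zip(quality, ok) if o) - margin
        return [q if o else floor for q, o in zip(quality, ok)]
    return list(quality)  # all pass or all fail: quality-only fallback

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize rewards within the sampled group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

quality = [0.9, 0.5, 0.7, 0.3]        # writing-quality scores for one group
ok = [True, False, True, False]       # constraint-verifier outcomes
adv = grpo_advantages(mixed_rewards(quality, ok))
```

With these inputs, both violating samples sit at the common floor below every satisfying sample, so their normalized advantages are negative and they are penalized by the policy update, while the best satisfying sample receives a positive advantage.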

Published

2026-03-14

How to Cite

Liao, J., Zhang, T., Feng, X., Zhang, Y., Wang, H., Wen, B., … Shi, R. (2026). RLMR: Reinforcement Learning with Mixed Rewards for Creative Writing. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31970–31978. https://doi.org/10.1609/aaai.v40i38.40467

Section

AAAI Technical Track on Natural Language Processing III