ESRL: Efficient Sampling-Based Reinforcement Learning for Sequence Generation

Authors

  • Chenglong Wang School of Computer Science and Engineering, Northeastern University, Shenyang, China
  • Hang Zhou School of Computer Science and Engineering, Northeastern University, Shenyang, China
  • Yimin Hu School of Computer Science and Engineering, Northeastern University, Shenyang, China
  • Yifu Huo School of Computer Science and Engineering, Northeastern University, Shenyang, China
  • Bei Li School of Computer Science and Engineering, Northeastern University, Shenyang, China
  • Tongran Liu CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China
  • Tong Xiao School of Computer Science and Engineering, Northeastern University, Shenyang, China; NiuTrans Research, Shenyang, China
  • Jingbo Zhu School of Computer Science and Engineering, Northeastern University, Shenyang, China; NiuTrans Research, Shenyang, China

DOI:

https://doi.org/10.1609/aaai.v38i17.29878

Keywords:

NLP: Generation

Abstract

Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (e.g., BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences. This poses a computational challenge in practical sequence generation problems, such as machine translation, where the action space (e.g., the vocabulary) is large and the action sequence (e.g., a translation) is long. In this work, we introduce two-stage sampling and dynamic sampling approaches to improve sampling efficiency when training sequence generation models with RL. We evaluate our approaches on traditional sequence generation tasks, including machine translation and abstractive summarization. Furthermore, we evaluate them in RL from human feedback (RLHF) by training a large language model with a reward model. Experimental results show that the efficient sampling-based RL, referred to as ESRL, outperforms all baselines in terms of both training efficiency and memory consumption. Notably, ESRL yields consistent performance gains over the strong REINFORCE, minimum risk training, and proximal policy optimization methods. The code is available at https://github.com/wangclnlp/DeepSpeed-Chat-Extension/examples/esrl.
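To make the sampling bottleneck the abstract refers to concrete, the sketch below shows a generic REINFORCE-style update for sequence generation. It is not the ESRL algorithm (the paper's two-stage and dynamic sampling are not reproduced here); it only illustrates why sampling dominates the cost, since each update draws many full-length sequences from the policy. The `policy.init_state` / `policy.step` interface, `reward_fn`, and the `num_samples` budget are hypothetical placeholders.

```python
# Illustrative sketch only, not the paper's method: a minimal REINFORCE-style
# policy-gradient step for sequence generation in PyTorch. The policy API and
# reward function are assumed placeholders.
import torch

def reinforce_step(policy, src, reward_fn, num_samples=8, max_len=64):
    """Sample candidate sequences, score each with a sequence-level reward
    (e.g., BLEU or a reward model), and weight log-likelihoods by the reward."""
    total_loss = 0.0
    for _ in range(num_samples):              # large-scale sampling: the bottleneck
        tokens, log_probs = [], []
        state = policy.init_state(src)        # hypothetical API
        for _ in range(max_len):
            logits, state = policy.step(state, tokens)  # hypothetical API
            dist = torch.distributions.Categorical(logits=logits)
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok.item())
            if tok.item() == policy.eos_id:   # stop at end-of-sequence token
                break
        reward = reward_fn(src, tokens)        # sequence-level reward
        total_loss = total_loss - reward * torch.stack(log_probs).sum()
    return total_loss / num_samples
```

Because the cost scales with the number of samples times the sequence length, reducing either (e.g., by adapting the sampling budget during training, as ESRL's dynamic sampling aims to do) directly reduces training time and memory.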

Published

2024-03-24

How to Cite

Wang, C., Zhou, H., Hu, Y., Huo, Y., Li, B., Liu, T., Xiao, T., & Zhu, J. (2024). ESRL: Efficient Sampling-Based Reinforcement Learning for Sequence Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19107-19115. https://doi.org/10.1609/aaai.v38i17.29878

Issue

Vol. 38 No. 17 (2024)
Section

AAAI Technical Track on Natural Language Processing II