AdaCuRL: Adaptive Curriculum Reinforcement Learning with Invalid Sample Mitigation and Historical Revisiting

Authors

  • Renda Li Alibaba Group
  • Hailang Huang Alibaba Group
  • Fei Wei Alibaba Group
  • Feng Xiong Alibaba Group
  • Yong Wang Alibaba Group
  • Xiangxiang Chu Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v40i27.39479

Abstract

Reinforcement learning (RL) has demonstrated considerable potential for enhancing reasoning in large language models (LLMs). However, existing methods suffer from Gradient Starvation and Policy Degradation when training directly on samples of mixed difficulty. To mitigate this, prior approaches leverage Chain-of-Thought (CoT) data, but constructing high-quality CoT annotations remains labor-intensive. Alternatively, curriculum learning strategies have been explored but frequently encounter challenges such as difficulty mismatch, reliance on manual curriculum design, and catastrophic forgetting. To address these issues, we propose AdaCuRL, an Adaptive Curriculum Reinforcement Learning framework that integrates coarse-to-fine difficulty estimation with adaptive curriculum scheduling. This approach dynamically aligns data difficulty with model capability and incorporates a data revisitation mechanism to mitigate catastrophic forgetting. Furthermore, AdaCuRL employs adaptive reference and sparse KL strategies to prevent Policy Degradation. Extensive experiments across diverse reasoning benchmarks demonstrate that AdaCuRL consistently achieves significant performance improvements on both LLMs and MLLMs.
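The scheduling idea in the abstract can be illustrated with a minimal sketch: estimate each sample's difficulty (here, hypothetically, from an observed pass rate), pick a batch whose difficulty tracks the model's current capability, and mix in previously seen samples to counter catastrophic forgetting. All function names and the pass-rate proxy are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def estimate_difficulty(pass_rates):
    # Hypothetical coarse proxy: a sample the model rarely solves is "hard".
    return {sample: 1.0 - p for sample, p in pass_rates.items()}

def schedule_batch(difficulty, capability, batch_size, history, revisit_frac=0.25):
    """Select samples whose difficulty is closest to the current capability
    level, reserving a fraction of the batch for historical revisiting."""
    n_revisit = int(batch_size * revisit_frac) if history else 0
    # Rank fresh (unseen) samples by distance between difficulty and capability.
    ranked = sorted(difficulty, key=lambda s: abs(difficulty[s] - capability))
    batch = [s for s in ranked if s not in history][: batch_size - n_revisit]
    # Revisit a random subset of already-trained samples to curb forgetting.
    batch += random.sample(sorted(history), min(n_revisit, len(history)))
    history.update(batch)
    return batch
```

A scheduler like this would be called once per training step, with `capability` updated as the policy improves so the curriculum adapts rather than following a fixed easy-to-hard order.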

Published

2026-03-14

How to Cite

Li, R., Huang, H., Wei, F., Xiong, F., Wang, Y., & Chu, X. (2026). AdaCuRL: Adaptive Curriculum Reinforcement Learning with Invalid Sample Mitigation and Historical Revisiting. Proceedings of the AAAI Conference on Artificial Intelligence, 40(27), 23123-23131. https://doi.org/10.1609/aaai.v40i27.39479

Section

AAAI Technical Track on Machine Learning IV