ReCode: Updating Code API Knowledge with Reinforcement Learning

Authors

  • Haoze Wu Zhejiang University
  • Yunzhi Yao Zhejiang University
  • Wenhao Yu Tencent AI Lab
  • Ningyu Zhang Zhejiang University; State Key Lab for Novel Software Technology, Nanjing University, P.R. China

DOI:

https://doi.org/10.1609/aaai.v40i40.40683

Abstract

Large Language Models (LLMs) exhibit remarkable code generation capabilities but falter when adapting to frequent updates in external library APIs. This critical limitation stems from reliance on outdated API knowledge in their training data: even with access to current documentation, LLMs struggle to generate reliable code in dynamic environments. To tackle this issue, we propose ReCode (rule-based Reinforcement learning for Code Update), a novel framework that mimics how human programmers adapt to API changes. Specifically, we construct a dataset of approximately 2,000 entries to train LLMs to perform version migration based on updated information. We then introduce a modified string similarity metric for code evaluation as the reinforcement learning reward. Our experiments demonstrate that ReCode substantially boosts LLMs' code generation performance in dynamic API scenarios, especially on the unseen CodeUpdateArena task. Crucially, compared with supervised fine-tuning, ReCode has less impact on LLMs' general code generation abilities. We apply ReCode to various LLMs and reinforcement learning algorithms (GRPO and DAPO), all achieving consistent improvements. Notably, after training, Qwen2.5-Coder-7B outperforms the 32B-parameter code instruction-tuned model and the reasoning model with the same architecture.
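The abstract describes using a string similarity metric over generated code as the RL reward. The paper's exact "modified" metric is not specified here; as a minimal sketch of the general idea, one could score a model's migrated code against a reference solution with a standard sequence-similarity ratio (here via Python's `difflib`; the function name and normalization step are illustrative assumptions, not the authors' implementation).

```python
import difflib


def similarity_reward(generated: str, reference: str) -> float:
    """Illustrative RL reward in [0, 1]: character-level similarity between
    the generated code and a reference solution.

    NOTE: this is a plain difflib ratio, not the paper's modified metric.
    """
    # Light normalization so trailing whitespace does not affect the score
    # (an assumed preprocessing step, not taken from the paper).
    gen = "\n".join(line.rstrip() for line in generated.strip().splitlines())
    ref = "\n".join(line.rstrip() for line in reference.strip().splitlines())
    return difflib.SequenceMatcher(None, gen, ref).ratio()


# Usage: an exact match yields the maximum reward of 1.0.
reward = similarity_reward("df.append(row)", "df.append(row)")
```

In a rule-based RL setup such as GRPO or DAPO, a reward like this would be computed per sampled completion and used to rank or advantage-weight candidates, avoiding the need for a learned reward model.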

Published

2026-03-14

How to Cite

Wu, H., Yao, Y., Yu, W., & Zhang, N. (2026). ReCode: Updating Code API Knowledge with Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 33908-33916. https://doi.org/10.1609/aaai.v40i40.40683

Section

AAAI Technical Track on Natural Language Processing V