ARBench: Algorithmic Reasoner or API Alchemist? Evaluating LLMs Beyond API Calls

Authors

  • Ren-Biao Liu, Nanjing University
  • Chao-Zeng Ma, Nanjing University
  • Anqi Li, Nanjing University
  • Hui Sun, Nanjing University
  • Xin-Ye Li, Nanjing University
  • Ming Li, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v40i38.40482

Abstract

Large Language Models (LLMs) have demonstrated impressive capabilities in code generation. Like human programmers, LLMs tend to call high-level APIs and libraries to program efficiently. However, this shortcut may hinder LLMs from learning essential algorithmic reasoning, leading instead to rote memorization of API usage. As a result, LLMs often struggle to generalize to new or domain-specific algorithms that lack ready-made library support. In this work, we propose ARBench, a novel benchmark for evaluating LLMs’ ability to generate machine learning algorithms from scratch, beyond merely invoking high-level APIs. The benchmark emphasizes algorithmic reasoning and implementation, distinguishing genuine understanding from superficial API usage, and covers both fundamental and advanced machine learning tasks, rigorously assessing current LLMs’ capacity to implement these algorithms from scratch. Our evaluation reveals the strengths and weaknesses of state-of-the-art LLMs in algorithmic reasoning and generalization, offering valuable insights to guide future research and development.
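The abstract does not enumerate ARBench's tasks, but the contrast it draws can be made concrete with a hypothetical example: where an API-reliant solution would call something like `sklearn.cluster.KMeans`, a "from scratch" solution must implement the algorithm itself. A minimal sketch of Lloyd's k-means algorithm in plain Python (deterministic initialization from the first k points; not taken from the paper):

```python
# Hypothetical "from scratch" k-means: no ML library calls.
# An API-style solution would instead do: sklearn.cluster.KMeans(n_clusters=k).fit(X)

def kmeans(points, k, iters=100):
    """Lloyd's algorithm on tuples of floats; init = first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to its cluster's mean.
        new_centroids = []
        for c, cluster in zip(centroids, clusters):
            if cluster:
                new_centroids.append([sum(dim) / len(cluster) for dim in zip(*cluster)])
            else:
                new_centroids.append(c)  # keep the old centroid for an empty cluster
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
print(sorted(kmeans(data, 2)))
```

Producing correct code of this kind requires reasoning about the assignment/update loop and its convergence, not just recalling a library signature, which is the gap the benchmark is designed to probe.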

Published

2026-03-14

How to Cite

Liu, R.-B., Ma, C.-Z., Li, A., Sun, H., Li, X.-Y., & Li, M. (2026). ARBench: Algorithmic Reasoner or API Alchemist? Evaluating LLMs Beyond API Calls. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32105–32113. https://doi.org/10.1609/aaai.v40i38.40482

Section

AAAI Technical Track on Natural Language Processing III