DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation

Authors

  • Qiming Zhu (Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China)
  • Jialun Cao (The Hong Kong University of Science and Technology, Hong Kong, China)
  • Yaojie Lu (Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China)
  • Hongyu Lin (Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China)
  • Xianpei Han (Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China)
  • Le Sun (Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China)
  • Shing-Chi Cheung (The Hong Kong University of Science and Technology, Hong Kong, China)

DOI:

https://doi.org/10.1609/aaai.v39i24.34811

Abstract

Code benchmarks such as HumanEval are widely adopted to evaluate the capabilities of Large Language Models (LLMs), providing insights into their strengths and weaknesses. However, current benchmarks primarily exercise LLMs' capabilities on common coding tasks (e.g., bubble sort, greatest common divisor), leaving domain-specific coding tasks (e.g., computation, system, cryptography) unexplored. To fill this gap, we propose DOMAINEVAL, a multi-domain code benchmark designed to evaluate LLMs' coding capabilities thoroughly. Our pipeline is fully automated, enabling push-button construction of formatted subjects under study from code repositories. Evaluating 12 representative LLMs against DOMAINEVAL yields several interesting findings. We observe that LLMs are generally good at computation tasks but fall short on cryptography and system coding tasks; for some LLMs, the performance gap between domains can be as large as 68.94 percentage points (80.94% - 12.00%). We also observe that generating more samples can increase the overall performance of LLMs, while the domain bias may even be amplified. The contributions of this study include: DOMAINEVAL, a code generation benchmark dataset encompassing six popular domains; a fully automated pipeline for constructing code benchmarks; and an identification of the limitations of LLMs in code generation tasks based on their performance on DOMAINEVAL, providing directions for future research improvements.
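The finding that generating more samples raises overall performance is consistent with a pass@k-style evaluation, the standard metric for benchmarks in the HumanEval family. As a point of reference, below is a minimal sketch of the unbiased pass@k estimator introduced by Chen et al. (2021); whether DOMAINEVAL uses exactly this metric is an assumption here, and the function name and example numbers are illustrative only.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): 1 - C(n-c, k) / C(n, k).

    n: total samples generated for a task
    c: number of those samples that pass the task's tests
    k: sample budget being scored
    """
    if n - c < k:
        # Too few failing samples to fill a size-k subset: some sample
        # in every subset passes, so pass@k is exactly 1.
        return 1.0
    # Compute 1 - C(n-c, k) / C(n, k) as a numerically stable product.
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical task: 200 samples drawn, 40 of them pass.
print(pass_at_k(200, 40, 1))   # 0.20 (= c / n)
print(pass_at_k(200, 40, 10))  # ~0.90: more samples lift the score
```

The example illustrates the abstract's observation: the same per-sample pass rate yields a much higher score at larger k, so gains from extra sampling can coexist with, or even amplify, a gap between domains.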

Published

2025-04-11

How to Cite

Zhu, Q., Cao, J., Lu, Y., Lin, H., Han, X., Sun, L., & Cheung, S.-C. (2025). DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 26148–26156. https://doi.org/10.1609/aaai.v39i24.34811

Issue

Vol. 39 No. 24 (2025)

Section

AAAI Technical Track on Natural Language Processing III