Evaluating LLM Reasoning in the Operations Research Domain with ORQA
DOI: https://doi.org/10.1609/aaai.v39i23.34673

Abstract
In this paper, we introduce and apply Operations Research Question Answering (ORQA), a new benchmark for assessing the generalization capabilities of Large Language Models (LLMs) in the specialized technical domain of Operations Research (OR). The benchmark is designed to evaluate whether LLMs can emulate the knowledge and reasoning skills of OR experts when given diverse and complex optimization problems. The dataset, crafted by OR experts, presents real-world optimization problems that require multi-step reasoning to construct their mathematical models. Our evaluations of various open-source LLMs, such as LLaMA 3.1, DeepSeek, and Mixtral, reveal modest performance, indicating a gap in their ability to generalize to specialized technical domains. This work contributes to the ongoing discourse on LLMs' generalization capabilities and provides insights for future research in this area. The dataset and evaluation code are publicly available.
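To illustrate the kind of modeling task the abstract describes, here is a hedged sketch (a hypothetical example, not drawn from the ORQA dataset): translating a short word problem, such as a factory choosing production quantities to maximize profit under a labor budget, into a formal linear program.

```latex
% Hypothetical illustration: a factory produces quantities x_1, x_2 of two
% goods to maximize profit subject to a shared labor budget (numbers invented).
\begin{align*}
\max_{x_1,\, x_2}\quad & 3x_1 + 5x_2 && \text{(profit)} \\
\text{s.t.}\quad & 2x_1 + 4x_2 \le 40 && \text{(labor hours)} \\
& x_1,\, x_2 \ge 0.
\end{align*}
```

Benchmarks of this type test whether a model can carry out each step of this translation — identifying decision variables, the objective, and the constraints — from the natural-language description alone.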
Published
2025-04-11
How to Cite
Mostajabdaveh, M., Yu, T. T. L., Dash, S. C. B., Ramamonjison, R., Byusa, J. S., Carenini, G., Zhou, Z., & Zhang, Y. (2025). Evaluating LLM Reasoning in the Operations Research Domain with ORQA. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24902-24910. https://doi.org/10.1609/aaai.v39i23.34673
Section
AAAI Technical Track on Natural Language Processing II