AgriEval: A Comprehensive Chinese Agricultural Benchmark for Large Language Models

Authors

  • Lian Yan Harbin Institute of Technology
  • Haotian Wang Harbin Institute of Technology
  • Chen Tang Institute for Advanced Algorithms Research
  • Haifeng Liu Harbin Institute of Technology
  • Tianyang Sun Harbin Institute of Technology
  • Liangliang Liu Harbin Institute of Technology
  • Yi Guan Harbin Institute of Technology
  • Jingchi Jiang Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v40i40.40716

Abstract

In the agricultural domain, the deployment of large language models (LLMs) is hindered by the lack of training data and evaluation benchmarks. To mitigate this issue, we propose AgriEval, the first comprehensive Chinese agricultural benchmark with three main characteristics: (1) Comprehensive Capability Evaluation. AgriEval covers six major agricultural categories and 29 subcategories, addressing four core cognitive scenarios—memorization, understanding, inference, and generation. (2) High-Quality Data. The dataset is curated from university-level examinations and assignments, providing a natural and robust benchmark for assessing the capacity of LLMs to apply knowledge and make expert-like decisions. (3) Diverse Formats and Extensive Scale. AgriEval comprises 14,697 multiple-choice questions and 2,167 open-ended question-and-answer pairs, establishing it as the most extensive agricultural benchmark available to date. We also present comprehensive experimental results over 51 open-source and commercial LLMs. The results reveal that most existing LLMs struggle to achieve 60 percent accuracy, underscoring the developmental potential of agricultural LLMs. Additionally, we conduct extensive experiments to investigate factors influencing model performance and propose strategies for enhancement.

Published

2026-03-14

How to Cite

Yan, L., Wang, H., Tang, C., Liu, H., Sun, T., Liu, L., … Jiang, J. (2026). AgriEval: A Comprehensive Chinese Agricultural Benchmark for Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 34205–34213. https://doi.org/10.1609/aaai.v40i40.40716

Section

AAAI Technical Track on Natural Language Processing V