MedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models

Authors

  • Yan Cai, East China Normal University
  • Linlin Wang, East China Normal University; Shanghai Artificial Intelligence Laboratory
  • Ye Wang, East China Normal University
  • Gerard de Melo, Hasso Plattner Institute, University of Potsdam
  • Ya Zhang, Shanghai Jiao Tong University; Shanghai Artificial Intelligence Laboratory
  • Yanfeng Wang, Shanghai Jiao Tong University; Shanghai Artificial Intelligence Laboratory
  • Liang He, East China Normal University

DOI:

https://doi.org/10.1609/aaai.v38i16.29723

Keywords:

NLP: (Large) Language Models, NLP: Interpretability, Analysis, and Evaluation of NLP Models

Abstract

The emergence of various large language models (LLMs) in the medical domain has highlighted the need for unified evaluation standards, as manual evaluation of LLMs proves time-consuming and labor-intensive. To address this issue, we introduce MedBench, a comprehensive benchmark for the Chinese medical domain, comprising 40,041 questions sourced from authentic examination exercises and medical reports across diverse branches of medicine. In particular, this benchmark is composed of four key components: the Chinese Medical Licensing Examination, the Resident Standardization Training Examination, the Doctor In-Charge Qualification Examination, and real-world clinic cases encompassing examinations, diagnoses, and treatments. MedBench replicates the educational progression and clinical practice experiences of doctors in Mainland China, thereby establishing itself as a credible benchmark for assessing both the mastery of medical knowledge and the reasoning abilities of medical LLMs. We perform extensive experiments and conduct an in-depth analysis from diverse perspectives, which culminate in the following findings: (1) Chinese medical LLMs underperform on this benchmark, highlighting the need for significant advances in clinical knowledge and diagnostic precision. (2) Several general-domain LLMs surprisingly possess considerable medical knowledge. These findings elucidate both the capabilities and limitations of LLMs within the context of MedBench, with the ultimate goal of aiding the medical research community.

Published

2024-03-24

How to Cite

Cai, Y., Wang, L., Wang, Y., de Melo, G., Zhang, Y., Wang, Y., & He, L. (2024). MedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17709-17717. https://doi.org/10.1609/aaai.v38i16.29723

Section

AAAI Technical Track on Natural Language Processing I