TreeEval: Benchmark-Free Evaluation of Large Language Models through Tree Planning
DOI:
https://doi.org/10.1609/aaai.v39i23.34627
Abstract
Recently, numerous new benchmarks have been established to evaluate the performance of large language models (LLMs), either by computing a holistic score or by employing another LLM as a judge. However, these approaches suffer from data leakage, owing to the open access of the benchmarks, and from an inflexible evaluation process. To address these issues, we introduce TreeEval, a benchmark-free evaluation method for LLMs that lets a high-performance LLM host an irreproducible evaluation session, essentially avoiding data leakage. This LLM acts as an examiner, raising a series of questions under a topic with a tree-planning strategy that considers the current evaluation status when deciding which question to generate next, ensuring the completeness and efficiency of the evaluation process. We evaluate 6 models of different parameter sizes, including 7B, 13B, and 34B, and achieve the highest correlation coefficient with AlpacaEval2.0 using only around 45 questions. We also conduct further analysis to show the robustness and reliability of TreeEval.
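To make the tree-planning idea concrete, the following is a minimal sketch of how an examiner-driven evaluation loop might look. It is not the authors' implementation: the node structure, the `generate_question` and `judge` stubs, the "expand on tie" expansion rule, and the budget of 45 questions mentioned in the abstract are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One examiner question and the verdict on the two models' answers."""
    question: str
    verdict: str | None = None          # "A", "B", or "tie" after judging
    children: list["Node"] = field(default_factory=list)

def generate_question(topic: str, parent: Node | None) -> str:
    # Stub for the examiner LLM: propose the next question under `topic`,
    # optionally refining the parent's question (hypothetical helper).
    return f"Follow-up on: {parent.question}" if parent else f"Question on: {topic}"

def judge(question: str) -> str:
    # Stub for the judging step: compare the two candidate models'
    # answers to `question` and return a verdict (hypothetical helper).
    return "tie"

def tree_eval(topic: str, max_questions: int = 45, max_depth: int = 3) -> Node:
    """Budget-limited tree planning: a node is expanded with a deeper,
    more discriminative follow-up question only while its comparison
    remains undecided ("tie")."""
    root = Node(generate_question(topic, None))
    frontier = [(root, 0)]
    asked = 0
    while frontier and asked < max_questions:
        node, depth = frontier.pop(0)
        node.verdict = judge(node.question)
        asked += 1
        if node.verdict == "tie" and depth < max_depth:
            child = Node(generate_question(topic, node))
            node.children.append(child)
            frontier.append((child, depth + 1))
    return root

if __name__ == "__main__":
    tree = tree_eval("algebra word problems")
    print(tree.question, "->", tree.verdict)
```

Because questions are generated on the fly from the current evaluation state, no fixed benchmark set exists to leak into training data, and the question budget stays small since undecided branches alone are explored further.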
Published
2025-04-11
How to Cite
Li, X., Lan, Y., & Yang, C. (2025). TreeEval: Benchmark-Free Evaluation of Large Language Models through Tree Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24485–24493. https://doi.org/10.1609/aaai.v39i23.34627
Section
AAAI Technical Track on Natural Language Processing II