How Much Do Large Language Models Cheat on Evaluation? Benchmarking Overestimation Under the One-Time-Pad-Based Framework

Authors

  • Zi Liang The Hong Kong Polytechnic University
  • Liantong Yu The Hong Kong Polytechnic University
  • Zhang Shiyu The Hong Kong Polytechnic University
  • Qingqing Ye The Hong Kong Polytechnic University
  • Haibo Hu The Hong Kong Polytechnic University PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems

DOI:

https://doi.org/10.1609/aaai.v40i44.41098

Abstract

Overestimation in the evaluation of large language models (LLMs) has become a growing concern. Owing to contamination of public benchmarks or imbalanced model training, LLMs may achieve inflated results on public benchmarks, whether intentionally or unintentionally, which leads to unfair comparisons among LLMs and undermines realistic assessments of their capabilities. Existing benchmarks attempt to address these issues by keeping test cases permanently secret, by mitigating contamination through human evaluation, or by repeatedly collecting and constructing new samples. However, these approaches fail to ensure reproducibility, transparency, and high efficiency simultaneously. Moreover, the extent of overestimation in current LLMs remains unquantified. To address these issues, we propose ArxivRoll, a dynamic evaluation framework inspired by the one-time pad in cryptography. ArxivRoll comprises two key components: i) SCP (Sequencing, Cloze, and Prediction), an automated generator of private test cases, and ii) Rugged Scores (RS), metrics that measure the proportion of public-benchmark contamination and training bias. Leveraging SCP, ArxivRoll constructs a new benchmark from recent arXiv articles every six months and employs it for one-time evaluations of LLM performance. Extensive experiments demonstrate the high quality of our benchmark, and we provide a systematic evaluation of current LLMs.
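The three SCP task types named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all function names and item formats below are hypothetical, and only the general idea (turning recent article text into sequencing, cloze, and prediction test items) follows the abstract.

```python
import random


def make_scp_items(sentences, seed=0):
    """Illustrative generator of the three SCP task types (Sequencing,
    Cloze, Prediction) from a list of article sentences. Field names and
    formats are assumptions for this sketch, not the paper's format."""
    rng = random.Random(seed)

    # Sequencing: shuffle the sentences; the model must recover the order.
    order = list(range(len(sentences)))
    rng.shuffle(order)
    sequencing = {
        "shuffled": [sentences[i] for i in order],
        # answer[j] = original position of the j-th shuffled sentence
        "answer": order,
    }

    # Cloze: blank out one sentence; the model must fill it in.
    blank = rng.randrange(len(sentences))
    cloze = {
        "context": [s if i != blank else "____" for i, s in enumerate(sentences)],
        "answer": sentences[blank],
    }

    # Prediction: given a prefix, the model must continue the text.
    cut = max(1, len(sentences) // 2)
    prediction = {"prefix": sentences[:cut], "answer": sentences[cut:]}

    return {"sequencing": sequencing, "cloze": cloze, "prediction": prediction}
```

Because the source text rolls over (new arXiv articles every six months) and each benchmark is used once, items generated this way act like a one-time pad: a model cannot have memorized them during training.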

Published

2026-03-14

How to Cite

Liang, Z., Yu, L., Shiyu, Z., Ye, Q., & Hu, H. (2026). How Much Do Large Language Models Cheat on Evaluation? Benchmarking Overestimation Under the One-Time-Pad-Based Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 37636–37644. https://doi.org/10.1609/aaai.v40i44.41098

Section

AAAI Special Track on AI Alignment