LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction

Authors

  • Yucheng Li, University of Surrey
  • Frank Guerin, University of Surrey
  • Chenghua Lin, University of Manchester

DOI

https://doi.org/10.1609/aaai.v38i17.29822

Keywords

NLP: Interpretability, Analysis, and Evaluation of NLP Models, NLP: (Large) Language Models

Abstract

Data contamination in evaluation is increasingly prevalent with the emergence of language models pre-trained on very large, automatically crawled corpora. This problem poses significant challenges for the accurate assessment of model capabilities and generalisation. In this paper, we propose LatestEval, an automatic method that leverages the most recent texts to create uncontaminated reading comprehension evaluations. LatestEval avoids data contamination by using only texts published within a recent time window, ensuring no overlap with the training corpora of pre-trained language models. We develop the LatestEval automated pipeline to 1) gather the latest texts; 2) identify key information; and 3) construct questions targeting that information while removing the existing answers from the context. This encourages models to infer the answers from the remaining context, rather than simply copying and pasting. Our experiments demonstrate that language models exhibit negligible memorisation behaviours on LatestEval, in contrast to previous benchmarks, suggesting a significantly reduced risk of data contamination and leading to a more robust evaluation. Data and code are publicly available at: https://github.com/liyucheng09/LatestEval.
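The three-step pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the helper names, the toy capitalised-span heuristic for "key information", and the cloze-style question template are all assumptions made for the sketch.

```python
import re


def gather_latest(texts, cutoff):
    """Step 1: keep only documents published after the cutoff date.

    `texts` is a list of (iso_date, passage) pairs; ISO date strings
    compare correctly as plain strings.
    """
    return [passage for date, passage in texts if date >= cutoff]


def identify_key_info(passage):
    """Step 2: a toy key-information extractor.

    Capitalised multi-word spans stand in here for whatever key-span
    selection the real pipeline performs.
    """
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)+", passage)


def construct_probe(passage, answer):
    """Step 3: remove the answer from the context so a model must infer
    it from what remains rather than copy it, and pair the redacted
    context with a cloze-style question.
    """
    context = passage.replace(answer, "____", 1)
    question = "Fill in the blank: what belongs in place of '____'?"
    return {"context": context, "question": question, "answer": answer}
```

For example, `construct_probe("Frank Guerin proposed the method.", "Frank Guerin")` yields a context with the name redacted, so a model that has merely memorised the passage gains no advantage over one that reasons from the surrounding text.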

Published

2024-03-24

How to Cite

Li, Y., Guerin, F., & Lin, C. (2024). LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18600-18607. https://doi.org/10.1609/aaai.v38i17.29822

Section

AAAI Technical Track on Natural Language Processing II