Training on the Test Set: Mapping the System-Problem Space in AI


  • José Hernández-Orallo — Universitat Politècnica de València; Leverhulme Centre for the Future of Intelligence
  • Wout Schellaert — Universitat Politècnica de València
  • Fernando Martínez-Plumed — Joint Research Centre, European Commission; Universitat Politècnica de València



Keywords

AI Evaluation, Robustness, Conditional Probability Estimators, Machine Learning


Many present and future problems associated with artificial intelligence are not due to its limitations, but to our poor assessment of its behaviour. Our evaluation procedures produce aggregated performance metrics that lack detail and quantified uncertainty about the following question: how will an AI system, with a particular profile \pi, behave for a new problem, characterised by a particular situation \mu? Instead of just aggregating test results, we can use machine learning methods to fully capitalise on this evaluation information. In this paper, we introduce the concept of an assessor model, \hat{R}(r|\pi,\mu), a conditional probability estimator trained on test data. We discuss how these assessors can be built by using information of the full system-problem space and illustrate a broad range of applications that derive from varied inferences and aggregations from \hat{R}. Building good assessor models will change the predictive and explanatory power of AI evaluation and will lead to new research directions for building and using them. We propose accompanying every deployed AI system with its own assessor.
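The idea of an assessor \hat{R}(r|\pi,\mu) can be illustrated with a minimal sketch. The data, feature names, and estimation method below are illustrative assumptions, not the paper's implementation: system profiles \pi and problem situations \mu are discretised, and the conditional probability of success is estimated by empirical frequency over the test records.

```python
from collections import defaultdict

# Hypothetical evaluation records: (system profile pi, problem situation mu, result r).
# Here r is binary (1 = success), and pi and mu are discretised for simplicity;
# a real assessor would be a calibrated ML model over richer feature vectors.
records = [
    ("sys_A", "easy", 1), ("sys_A", "easy", 1), ("sys_A", "hard", 0),
    ("sys_A", "hard", 1), ("sys_B", "easy", 1), ("sys_B", "hard", 0),
    ("sys_B", "hard", 0), ("sys_B", "easy", 0),
]

def train_assessor(records):
    """Estimate R_hat(r=1 | pi, mu) as the empirical success rate per (pi, mu) cell."""
    counts = defaultdict(lambda: [0, 0])  # (pi, mu) -> [successes, trials]
    for pi, mu, r in records:
        counts[(pi, mu)][0] += r
        counts[(pi, mu)][1] += 1
    return {cell: succ / total for cell, (succ, total) in counts.items()}

assessor = train_assessor(records)
print(assessor[("sys_A", "easy")])  # 1.0
print(assessor[("sys_B", "hard")])  # 0.0
```

Aggregating \hat{R} over problems recovers a conventional benchmark score for a system, while conditioning on a particular \mu yields the instance-level prediction (with uncertainty) that aggregate metrics discard.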




How to Cite

Hernández-Orallo, J., Schellaert, W., & Martínez-Plumed, F. (2022). Training on the Test Set: Mapping the System-Problem Space in AI. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12256-12261.