Training on the Test Set: Mapping the System-Problem Space in AI

Authors

  • José Hernández-Orallo, Universitat Politècnica de València; Leverhulme Centre for the Future of Intelligence
  • Wout Schellaert, Universitat Politècnica de València
  • Fernando Martínez-Plumed, Joint Research Centre, European Commission; Universitat Politècnica de València

DOI:

https://doi.org/10.1609/aaai.v36i11.21487

Keywords:

AI Evaluation, Robustness, Conditional Probability Estimators, Machine Learning

Abstract

Many present and future problems associated with artificial intelligence are not due to its limitations, but to our poor assessment of its behaviour. Our evaluation procedures produce aggregated performance metrics that lack detail and quantified uncertainty about the following question: how will an AI system, with a particular profile \pi, behave for a new problem, characterised by a particular situation \mu? Instead of just aggregating test results, we can use machine learning methods to fully capitalise on this evaluation information. In this paper, we introduce the concept of an assessor model, \hat{R}(r|\pi,\mu), a conditional probability estimator trained on test data. We discuss how these assessors can be built by using information from the full system-problem space and illustrate a broad range of applications that follow from varied inferences and aggregations over \hat{R}. Building good assessor models will change the predictive and explanatory power of AI evaluation and will lead to new research directions for building and using them. We propose accompanying every deployed AI system with its own assessor.
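To make the abstract's central idea concrete, here is a minimal sketch of an assessor \hat{R}(r|\pi,\mu): a conditional probability estimator fitted on past test records of (system profile, problem situation, result). The toy feature vectors and the nearest-neighbour frequency estimator are illustrative assumptions for this sketch, not the estimator proposed in the paper.

```python
import math

# Each test record: (system profile pi, problem situation mu, binary result r).
# The numeric features below are toy placeholders, not real evaluation data.
records = [
    ((0.9, 0.1), (0.2,), 1),
    ((0.9, 0.1), (0.8,), 0),
    ((0.4, 0.7), (0.2,), 1),
    ((0.4, 0.7), (0.9,), 0),
    ((0.9, 0.1), (0.3,), 1),
    ((0.4, 0.7), (0.7,), 0),
]

def assessor(pi, mu, k=3):
    """Estimate P(r=1 | pi, mu) as the success rate among the k test
    records whose (pi, mu) features are closest to the query."""
    def dist(rec):
        p, m, _ = rec
        return math.dist(pi + mu, p + m)  # Euclidean distance in joint space
    nearest = sorted(records, key=dist)[:k]
    return sum(r for _, _, r in nearest) / k

# Predict how a system with profile (0.9, 0.1) will fare on a new,
# previously unseen problem situation (0.25,).
p_success = assessor((0.9, 0.1), (0.25,))
```

A deployed assessor would of course be trained on far richer system and problem characterisations, and any calibrated conditional estimator (e.g. a probabilistic classifier) could play the same role; the point of the sketch is only the interface: test results in, conditional probability \hat{R}(r|\pi,\mu) out.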

Published

2022-06-28

How to Cite

Hernández-Orallo, J., Schellaert, W., & Martínez-Plumed, F. (2022). Training on the Test Set: Mapping the System-Problem Space in AI. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12256-12261. https://doi.org/10.1609/aaai.v36i11.21487