Variable-Based Calibration for Machine Learning Classifiers

Authors

  • Markelle Kelly, University of California, Irvine
  • Padhraic Smyth, University of California, Irvine

DOI:

https://doi.org/10.1609/aaai.v37i7.25991

Keywords:

ML: Calibration & Uncertainty Quantification, ML: Bias and Fairness

Abstract

The deployment of machine learning classifiers in high-stakes domains requires well-calibrated confidence scores for model predictions. In this paper we introduce the notion of variable-based calibration to characterize calibration properties of a model with respect to a variable of interest, generalizing traditional score-based metrics such as expected calibration error (ECE). In particular, we find that models with near-perfect ECE can exhibit significant miscalibration as a function of features of the data. We demonstrate this phenomenon both theoretically and in practice on multiple well-known datasets, and show that it can persist after the application of existing calibration methods. To mitigate this issue, we propose strategies for detection, visualization, and quantification of variable-based calibration error. We then examine the limitations of current score-based calibration methods and explore potential modifications. Finally, we discuss the implications of these findings, emphasizing that an understanding of calibration beyond simple aggregate measures is crucial for endeavors such as fairness and model interpretability.
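For illustration only, here is a minimal sketch (not taken from the paper) of the general idea: standard ECE bins predictions by the model's confidence score, whereas a variable-based calibration error bins by a feature of interest instead. The function name, the binning scheme, and the arrays `y_true`, `y_prob`, and `age` are assumptions for this example; the paper's exact metric definition may differ.

```python
import numpy as np

def calibration_error(y_true, y_prob, bin_values, n_bins=10):
    """Weighted average of |accuracy - mean confidence| over bins of bin_values.

    Binning on y_prob itself recovers standard ECE; binning on a feature
    of interest gives a variable-based calibration error in the spirit of
    the paper (illustrative sketch, not the authors' exact definition).
    """
    # Equal-mass bins over the chosen binning variable
    edges = np.quantile(bin_values, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(bin_values, edges[1:-1]), 0, n_bins - 1)

    err, n = 0.0, len(y_true)
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() == 0:
            continue
        # Binary classification: predicted label and confidence from y_prob
        acc = (y_true[mask] == (y_prob[mask] >= 0.5)).mean()
        conf = np.where(y_prob[mask] >= 0.5, y_prob[mask], 1 - y_prob[mask]).mean()
        err += mask.sum() / n * abs(acc - conf)
    return err

# Standard ECE: bin on the confidence score itself
# ece = calibration_error(y_true, y_prob, bin_values=y_prob)
# Variable-based: bin on a feature of interest, e.g. a hypothetical age column
# vce = calibration_error(y_true, y_prob, bin_values=age)
```

A model can make the score-based quantity small while the feature-based quantity stays large, which is the miscalibration-with-respect-to-a-variable phenomenon the abstract describes.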

Published

2023-06-26

How to Cite

Kelly, M., & Smyth, P. (2023). Variable-Based Calibration for Machine Learning Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8211-8219. https://doi.org/10.1609/aaai.v37i7.25991

Issue

Vol. 37 No. 7 (2023)

Section

AAAI Technical Track on Machine Learning II