Variable-Based Calibration for Machine Learning Classifiers


  • Markelle Kelly University of California, Irvine
  • Padhraic Smyth University of California, Irvine



Keywords: ML: Calibration & Uncertainty Quantification, ML: Bias and Fairness


The deployment of machine learning classifiers in high-stakes domains requires well-calibrated confidence scores for model predictions. In this paper we introduce the notion of variable-based calibration to characterize calibration properties of a model with respect to a variable of interest, generalizing traditional score-based metrics such as expected calibration error (ECE). In particular, we find that models with near-perfect ECE can exhibit significant miscalibration as a function of features of the data. We demonstrate this phenomenon both theoretically and in practice on multiple well-known datasets, and show that it can persist after the application of existing calibration methods. To mitigate this issue, we propose strategies for detection, visualization, and quantification of variable-based calibration error. We then examine the limitations of current score-based calibration methods and explore potential modifications. Finally, we discuss the implications of these findings, emphasizing that an understanding of calibration beyond simple aggregate measures is crucial for endeavors such as fairness and model interpretability.
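The score-based metric the abstract generalizes, expected calibration error (ECE), can be computed by binning predictions by confidence and averaging the per-bin gap between accuracy and mean confidence. The sketch below is a generic illustration of that standard binned estimator, not the authors' implementation; the function name and bin count are illustrative choices.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean of |accuracy - confidence| over bins.

    confidences: predicted confidence for each example, in [0, 1].
    correct: 1 if the prediction was correct, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Equal-width confidence bins on [0, 1].
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(confidences, edges) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of examples in bin
    return ece
```

A model can score near zero on this aggregate metric while still being systematically over- or under-confident for particular values of a feature, which is the miscalibration the paper's variable-based notion is designed to expose.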




How to Cite

Kelly, M., & Smyth, P. (2023). Variable-Based Calibration for Machine Learning Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8211-8219.



AAAI Technical Track on Machine Learning II