Uncertainty Quantification for Machine Learning: One Size Does Not Fit All

Authors

  • Paul Hofman — LMU Munich; Munich Center for Machine Learning (MCML)
  • Yusuf Sale — LMU Munich; Munich Center for Machine Learning (MCML)
  • Eyke Hüllermeier — LMU Munich; Munich Center for Machine Learning (MCML); German Research Center for Artificial Intelligence (DFKI, DSA)

DOI:

https://doi.org/10.1609/aaai.v40i26.39323

Abstract

Proper quantification of predictive uncertainty is essential for the use of machine learning in safety-critical applications. Various uncertainty measures have been proposed for this purpose, typically claiming superiority over other measures. In this paper, we argue that there is no single best measure. Instead, uncertainty quantification should be tailored to the specific application. To this end, we use a flexible family of uncertainty measures that distinguishes between total, aleatoric, and epistemic uncertainty of second-order distributions. These measures can be instantiated with specific loss functions, so-called proper scoring rules, to control their characteristics, and we show that different characteristics are useful for different tasks. In particular, we show that, for the task of selective prediction, the scoring rule should ideally match the task loss. On the other hand, for out-of-distribution detection, our results confirm that mutual information, a widely used measure of epistemic uncertainty, performs best. Furthermore, in an active learning setting, epistemic uncertainty based on zero-one loss is shown to consistently outperform other uncertainty measures.
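The measures the abstract refers to decompose the uncertainty of a second-order (e.g., ensemble-based) prediction into total, aleatoric, and epistemic parts. As a minimal sketch of this idea — using the standard entropy-based instantiation (log loss as the scoring rule), not necessarily the paper's exact formulation — total uncertainty is the entropy of the mean prediction, aleatoric uncertainty is the mean entropy of the individual predictions, and their difference is the mutual information, the epistemic measure mentioned for out-of-distribution detection:

```python
import numpy as np

def uncertainty_decomposition(probs):
    """Entropy-based uncertainty decomposition for an ensemble.

    probs: array of shape (M, K) — M ensemble members, each a
           predictive distribution over K classes.
    Returns (total, aleatoric, epistemic), where
      total     = entropy of the mean prediction,
      aleatoric = mean entropy of the individual predictions,
      epistemic = total - aleatoric (the mutual information).
    """
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12  # guard against log(0)
    mean_pred = probs.mean(axis=0)
    total = -np.sum(mean_pred * np.log(mean_pred + eps))
    aleatoric = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    epistemic = total - aleatoric  # mutual information, always >= 0
    return total, aleatoric, epistemic

# Agreeing members: uncertainty is mostly aleatoric.
t_agree, a_agree, e_agree = uncertainty_decomposition(
    [[0.9, 0.1], [0.9, 0.1]])
# Disagreeing members: uncertainty is mostly epistemic.
t_dis, a_dis, e_dis = uncertainty_decomposition(
    [[0.99, 0.01], [0.01, 0.99]])
```

In the first case the members agree, so the mutual information is near zero; in the second they disagree sharply, so epistemic uncertainty dominates — which is precisely the signal exploited for out-of-distribution detection.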

Published

2026-03-14

How to Cite

Hofman, P., Sale, Y., & Hüllermeier, E. (2026). Uncertainty Quantification for Machine Learning: One Size Does Not Fit All. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21726–21734. https://doi.org/10.1609/aaai.v40i26.39323

Section

AAAI Technical Track on Machine Learning III