Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

Authors

  • Jayaraman J. Thiagarajan, Lawrence Livermore National Labs
  • Bindya Venkatesh, Arizona State University
  • Prasanna Sattigeri, IBM Research AI
  • Peer-Timo Bremer, Lawrence Livermore National Labs

DOI:

https://doi.org/10.1609/aaai.v34i04.6062

Abstract

With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results on at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an uncertainty matching strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, both in terms of model fidelity and calibration error.
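
As a rough illustration of the idea summarized above, the sketch below alternates between updating an auxiliary interval predictor (fit for coverage and sharpness around the current predictions) and a mean predictor that is additionally penalized when its residual magnitudes disagree with the interval widths, i.e., an uncertainty-matching term. All names (f, g, interval_loss, train_step), network sizes, loss forms, and hyperparameters are illustrative assumptions and not the paper's exact formulation of the bi-level optimization.

    # Minimal sketch (assumed setup, not the authors' exact algorithm).
    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim, hidden=64):
        return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, out_dim))

    def interval_loss(y, lo, hi, lam=0.5):
        # Penalize targets escaping [lo, hi] (coverage) and wide intervals (sharpness).
        escape = torch.relu(lo - y) + torch.relu(y - hi)
        return (escape + lam * (hi - lo)).mean()

    def train_step(f, g, opt_f, opt_g, x, y, beta=1.0):
        # (1) Update the auxiliary interval predictor g with f held fixed.
        with torch.no_grad():
            mu = f(x)
        half = torch.nn.functional.softplus(g(x))   # positive half-widths
        loss_g = interval_loss(y, mu - half, mu + half)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

        # (2) Update the predictor f; match |residual| to g's interval widths.
        with torch.no_grad():
            half = torch.nn.functional.softplus(g(x))
        mu = f(x)
        loss_f = ((mu - y) ** 2).mean() + beta * ((mu - y).abs() - half).pow(2).mean()
        opt_f.zero_grad()
        loss_f.backward()
        opt_f.step()
        return loss_f.item(), loss_g.item()

    # Usage on toy 1-D regression data.
    f, g = mlp(1, 1), mlp(1, 1)
    opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
    x = torch.rand(256, 1) * 6 - 3
    y = torch.sin(x) + 0.1 * torch.randn_like(x)
    for epoch in range(200):
        train_step(f, g, opt_f, opt_g, x, y)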

Published

2020-04-03

How to Cite

Thiagarajan, J. J., Venkatesh, B., Sattigeri, P., & Bremer, P.-T. (2020). Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6005-6012. https://doi.org/10.1609/aaai.v34i04.6062

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning