Robust Uncertainty Quantification Using Conformalised Monte Carlo Prediction

Authors

  • Daniel Bethell, University of York
  • Simos Gerasimou, University of York
  • Radu Calinescu, University of York

DOI:

https://doi.org/10.1609/aaai.v38i19.30084

Keywords:

General

Abstract

Deploying deep learning models in safety-critical applications remains a very challenging task, mandating the provision of assurances for the dependable operation of these models. Uncertainty quantification (UQ) methods estimate the model’s confidence per prediction, informing decision-making by considering the effect of randomness and model misspecification. Despite the advances of state-of-the-art UQ methods, they are computationally expensive or produce conservative prediction sets/intervals. We introduce MC-CP, a novel hybrid UQ method that combines a new adaptive Monte Carlo (MC) dropout method with conformal prediction (CP). MC-CP adaptively modulates the traditional MC dropout at runtime to save memory and computation resources, enabling predictions to be consumed by CP and yielding robust prediction sets/intervals. Through comprehensive experiments, we show that MC-CP delivers significant improvements over comparable UQ methods, such as MC dropout, RAPS and CQR, on both classification and regression benchmarks. MC-CP can be easily added to existing models, making its deployment simple. The MC-CP code and replication package are available at https://github.com/team-daniel/MC-CP.
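The abstract's core idea of feeding MC dropout predictions into conformal prediction can be illustrated with a minimal sketch. This is not the paper's MC-CP algorithm (which adaptively modulates dropout at runtime): it is a generic split-conformal classification example in which the MC dropout forward passes are simulated with random softmax outputs, and the prediction set for each test point contains every class whose averaged probability exceeds a calibrated threshold. All names and the simulated model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, n_mc = 500, 10, 50
alpha = 0.1  # target miscoverage rate (90% coverage)

# Stand-in for MC dropout: each stochastic forward pass yields a softmax
# vector; averaging over passes approximates the predictive distribution.
def mc_dropout_probs(n_samples):
    logits = rng.normal(size=(n_samples, n_mc, n_classes))
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return probs.mean(axis=1)  # average over the MC passes

# Calibration: nonconformity score = 1 - probability of the true label.
cal_probs = mc_dropout_probs(n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the standard finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set: all classes whose averaged probability is >= 1 - qhat.
test_probs = mc_dropout_probs(5)
pred_sets = [np.where(p >= 1 - qhat)[0] for p in test_probs]
```

Under the split-conformal guarantee, prediction sets built this way cover the true label with probability at least 1 - alpha on exchangeable data; the paper's contribution is making the MC dropout stage adaptive so that this pipeline is cheap enough to run at inference time.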

Published

2024-03-24

How to Cite

Bethell, D., Gerasimou, S., & Calinescu, R. (2024). Robust Uncertainty Quantification Using Conformalised Monte Carlo Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 20939-20948. https://doi.org/10.1609/aaai.v38i19.30084

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track