Improving Uncertainty Quantification of Deep Classifiers via Neighborhood Conformal Prediction: Novel Algorithm and Theoretical Analysis
DOI:
https://doi.org/10.1609/aaai.v37i6.25936
Keywords:
ML: Calibration & Uncertainty Quantification, ML: Classification and Regression
Abstract
Safe deployment of deep neural networks in high-stakes real-world applications requires theoretically sound uncertainty quantification. Conformal prediction (CP) is a principled framework for uncertainty quantification of deep models in the form of prediction sets for classification tasks with a user-specified coverage (i.e., the true class label is contained with high probability). This paper proposes a novel algorithm, referred to as Neighborhood Conformal Prediction (NCP), to improve the efficiency of the uncertainty quantification produced by CP for deep classifiers (i.e., reduce prediction set size). The key idea behind NCP is to use the learned representation of the neural network to identify the k nearest-neighbor calibration examples for a given test input and assign them importance weights proportional to their distance, creating adaptive prediction sets. We theoretically show that if the learned data representation of the neural network satisfies some mild conditions, NCP will produce smaller prediction sets than traditional CP algorithms. Our comprehensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets using diverse deep neural networks strongly demonstrate that NCP leads to a significant reduction in prediction set size over prior CP methods.
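To make the neighborhood-weighted idea concrete, below is a minimal, hypothetical sketch of how such a prediction set could be formed: features of a test input select the k nearest calibration examples, distance-based weights define a weighted quantile of their nonconformity scores, and every class label scoring below that threshold is included. The function name, the inverse-distance weighting, and the softmax-based nonconformity score are assumptions for illustration and may differ from the exact procedure in the paper.

```python
import numpy as np

def ncp_prediction_set(x_test_feat, x_test_probs,
                       calib_feats, calib_scores,
                       k=50, alpha=0.1):
    """Hypothetical sketch of a neighborhood-weighted conformal set.

    calib_scores[i] is the nonconformity score of calibration example i
    (e.g., 1 - softmax probability assigned to its true label).
    """
    # Distances to calibration examples in the network's representation space.
    dists = np.linalg.norm(calib_feats - x_test_feat, axis=1)
    nn_idx = np.argsort(dists)[:k]

    # Inverse-distance importance weights over the k neighbors (assumed form).
    w = 1.0 / (dists[nn_idx] + 1e-8)
    w = w / w.sum()

    # Weighted (1 - alpha) quantile of the neighbors' nonconformity scores.
    order = np.argsort(calib_scores[nn_idx])
    sorted_scores = calib_scores[nn_idx][order]
    cum_w = np.cumsum(w[order])
    idx = min(np.searchsorted(cum_w, 1.0 - alpha), k - 1)
    tau = sorted_scores[idx]

    # Include every class label whose nonconformity score is below the threshold.
    test_scores = 1.0 - x_test_probs  # one score per class
    return np.where(test_scores <= tau)[0]
```

Because the quantile threshold is computed from locally weighted calibration scores rather than the full calibration set, the set size can adapt to how difficult the test input's neighborhood is.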
Published
2023-06-26
How to Cite
Ghosh, S., Belkhouja, T., Yan, Y., & Doppa, J. R. (2023). Improving Uncertainty Quantification of Deep Classifiers via Neighborhood Conformal Prediction: Novel Algorithm and Theoretical Analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7722-7730. https://doi.org/10.1609/aaai.v37i6.25936
Issue
Section
AAAI Technical Track on Machine Learning I