Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks

Authors

  • Hongfei Du George Washington University
  • Emre Barut Amazon
  • Fang Jin George Washington University

Keywords

Other Foundations of Reasoning under Uncertainty

Abstract

Despite the popularity of Convolutional Neural Networks (CNNs), the problem of uncertainty quantification (UQ) for CNNs has been largely overlooked. The lack of efficient UQ tools severely limits the application of CNNs in areas such as medicine, where prediction uncertainty is critically important. Among the few UQ approaches that have been proposed for deep learning, none offers theoretical consistency guarantees for the quality of the estimated uncertainty. To address this issue, we propose a novel bootstrap-based framework for the estimation of prediction uncertainty. Our inference procedure relies on convexified neural networks to establish the theoretical consistency of the bootstrap. The approach carries a significantly lower computational load than its competitors, as it uses warm starts at each bootstrap iteration and thus avoids refitting the model from scratch. We further explore a novel transfer learning method so that our framework can be applied to arbitrary neural networks. Experiments on various image datasets demonstrate that our approach substantially outperforms baseline CNNs and state-of-the-art methods.
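The warm-start bootstrap idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's convexified-CNN procedure; it uses a plain linear model fit by gradient descent, where each bootstrap replicate is refined for a few steps starting from the full-data fit instead of being retrained from scratch, and prediction uncertainty is read off the percentiles of the bootstrap predictions. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def fit(X, y, w, lr=0.1, steps=200):
    """Gradient descent on squared loss, starting from w (enables warm starts)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)

# One full fit from scratch on the original data.
w_hat = fit(X, y, np.zeros(3), steps=500)

# Bootstrap: resample the data, then warm-start from w_hat with only a
# few refinement steps instead of refitting from zero.
x_new = np.array([0.5, 0.5, 0.5])
preds = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    w_b = fit(X[idx], y[idx], w_hat.copy(), steps=50)
    preds.append(x_new @ w_b)

# Percentile interval quantifies prediction uncertainty at x_new.
lo, hi = np.percentile(preds, [2.5, 97.5])
```

The warm start is what makes the scheme cheap: because each resampled objective is close to the original one, a handful of refinement steps from `w_hat` suffices, whereas a cold restart would repeat the full optimization for every bootstrap replicate.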

Published

2021-05-18

How to Cite

Du, H., Barut, E., & Jin, F. (2021). Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 12078-12085. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17434

Section

AAAI Technical Track on Reasoning under Uncertainty