Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks
Keywords: Other Foundations of Reasoning under Uncertainty
Abstract

Despite the popularity of Convolutional Neural Networks (CNNs), the problem of uncertainty quantification (UQ) for CNNs has been largely overlooked. The lack of efficient UQ tools severely limits the application of CNNs in areas such as medicine, where prediction uncertainty is critically important. Among the few UQ approaches that have been proposed for deep learning, none offers theoretical consistency guarantees on the quality of the resulting uncertainty estimates. To address this issue, we propose a novel bootstrap-based framework for estimating prediction uncertainty. Our inference procedure relies on convexified neural networks to establish the theoretical consistency of the bootstrap. The approach imposes a significantly lighter computational load than its competitors, as it uses warm starts at each bootstrap iteration and thus avoids refitting the model from scratch. We further explore a novel transfer-learning method so that our framework can work with arbitrary neural networks. We experimentally demonstrate that our approach performs substantially better than baseline CNNs and state-of-the-art methods on various image datasets.
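The core computational idea in the abstract, bootstrap resampling with warm starts so that each replicate needs only a few optimization steps rather than a full refit, can be illustrated on a toy convex model. The sketch below is not the authors' method (which uses convexified neural networks); it stands in a linear least-squares model for the convex learner, and the function names (`fit_warm`, `bootstrap_predict`) and step counts are illustrative assumptions.

```python
import numpy as np

def fit_warm(X, y, w0, lr=0.1, steps=200):
    """Gradient descent on least squares, starting from w0 (the warm start)."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*mean((Xw - y)^2)
        w -= lr * grad
    return w

def bootstrap_predict(X, y, x_new, B=200, seed=0):
    """Point prediction and 95% bootstrap percentile interval at x_new.

    Each bootstrap replicate is warm-started from the full-data fit,
    so it needs far fewer optimization steps than fitting from scratch.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    # One expensive fit on the full data (cold start).
    w_full = fit_warm(X, y, np.zeros(X.shape[1]), steps=1000)
    preds = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)                   # resample with replacement
        w_b = fit_warm(X[idx], y[idx], w_full, steps=50)   # cheap warm-started refit
        preds[b] = x_new @ w_b
    lo, hi = np.percentile(preds, [2.5, 97.5])
    return x_new @ w_full, (lo, hi)
```

Because the objective is convex, every warm-started replicate converges toward the resampled optimum regardless of the starting point, which is the property the convexification is meant to secure for the bootstrap's consistency.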
How to Cite
Du, H., Barut, E., & Jin, F. (2021). Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 12078-12085. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17434
AAAI Technical Track on Reasoning under Uncertainty