Uncertainty-Aware Deep Classifiers Using Generative Models

Authors

  • Murat Sensoy, Blue Prism AI Labs
  • Lance Kaplan, US Army Research Lab
  • Federico Cerutti, University of Brescia
  • Maryam Saleki, Ozyegin University

DOI:

https://doi.org/10.1609/aaai.v34i04.6015

Abstract

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selecting or creating such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that expresses both aleatoric and epistemic uncertainty to distinguish decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in-distribution, out-of-distribution, and adversarial samples on well-known data sets than state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.
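The abstract describes the approach only at a high level, so the following is a minimal PyTorch sketch of the general idea rather than the authors' exact model or losses. It assumes a Dirichlet (evidential) output head and a pre-trained VAE whose encode/decode interface is hypothetical: out-of-distribution exemplars are decoded from perturbed latent codes, and the classifier is penalized whenever it is confident on them.

    # Illustrative sketch only -- the architecture, losses, and hyperparameters
    # below are assumptions, not the paper's exact formulation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_CLASSES = 10  # assumed number of classes

    class EvidentialClassifier(nn.Module):
        """Outputs Dirichlet parameters alpha = evidence + 1 instead of softmax probabilities."""
        def __init__(self, in_dim=784, hidden=256, num_classes=NUM_CLASSES):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_classes))

        def forward(self, x):
            # Non-negative evidence; alpha close to 1 everywhere means "I don't know".
            return F.softplus(self.net(x)) + 1.0

    def kl_to_uniform_dirichlet(alpha):
        # KL( Dir(alpha) || Dir(1,...,1) ): large when the output is confidently peaked.
        k = alpha.shape[1]
        s = alpha.sum(dim=1, keepdim=True)
        kl = (torch.lgamma(s.squeeze(1)) - torch.lgamma(alpha).sum(1)
              - torch.lgamma(torch.tensor(float(k)))
              + ((alpha - 1.0) * (torch.digamma(alpha) - torch.digamma(s))).sum(1))
        return kl.mean()

    def training_loss(model, vae, x, y, ood_weight=0.1):
        # Supervised term on real data: expected cross-entropy under Dir(alpha).
        alpha = model(x)
        s = alpha.sum(dim=1, keepdim=True)
        ce = (F.one_hot(y, NUM_CLASSES) *
              (torch.digamma(s) - torch.digamma(alpha))).sum(1).mean()

        # Generate out-of-distribution exemplars by decoding perturbed latent codes.
        # `vae.encode` / `vae.decode` are a hypothetical interface of a pre-trained VAE.
        with torch.no_grad():
            z = vae.encode(x)
            x_ood = vae.decode(z + 2.0 * torch.randn_like(z))

        # On generated exemplars, pull the Dirichlet toward uniform (high uncertainty).
        return ce + ood_weight * kl_to_uniform_dirichlet(model(x_ood))

A GAN discriminator term, which the abstract also mentions, would additionally steer generated exemplars toward the boundary of the training distribution; it is omitted here for brevity.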

Published

2020-04-03

How to Cite

Sensoy, M., Kaplan, L., Cerutti, F., & Saleki, M. (2020). Uncertainty-Aware Deep Classifiers Using Generative Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5620-5627. https://doi.org/10.1609/aaai.v34i04.6015

Issue

Vol. 34 No. 04 (2020)
Section

AAAI Technical Track: Machine Learning