DIBS: Diversity Inducing Information Bottleneck in Model Ensembles

Authors

  • Samarth Sinha, University of Toronto, Vector Institute
  • Homanga Bharadhwaj, University of Toronto, Vector Institute
  • Anirudh Goyal, University of Montreal
  • Hugo Larochelle, Google
  • Animesh Garg, University of Toronto, Vector Institute
  • Florian Shkurti, University of Toronto, Vector Institute

DOI:

https://doi.org/10.1609/aaai.v35i11.17163

Keywords:

Ensemble Methods, (Deep) Neural Network Algorithms

Abstract

Although deep learning models have achieved state-of-the-art performance on a number of vision tasks, generalization over high-dimensional multi-modal data and reliable predictive uncertainty estimation are still active areas of research. Bayesian approaches, including Bayesian Neural Nets (BNNs), do not scale well to modern computer vision tasks, as they are difficult to train and generalize poorly under dataset shift. This motivates the need for effective ensembles that can generalize and give reliable uncertainty estimates. In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction. We explicitly optimize a diversity-inducing adversarial loss for learning the stochastic latent variables and thereby obtain the diversity in output predictions necessary for modeling multi-modal data. We evaluate our method on benchmark datasets (MNIST, CIFAR100, TinyImageNet, and MIT Places 2) and, compared to the most competitive baselines, show over 10% relative improvement in classification accuracy, over 5% relative improvement in generalization under dataset shift, and over 5% better predictive uncertainty estimation as inferred by efficient out-of-distribution (OOD) detection.
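To make the idea concrete, below is a minimal, hypothetical sketch in PyTorch of the kind of setup the abstract describes: an ensemble of classifiers, each with a stochastic latent bottleneck, trained with a task loss plus a regularizer that pushes members' predictive distributions apart. The exact DIBS objective (its adversarial diversity loss and information-bottleneck formulation) is not given on this page, so the KL-to-prior bottleneck, the pairwise-disagreement penalty, and all names (`StochasticClassifier`, `ensemble_loss`, `beta`, `gamma`) are placeholder assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch: ensemble with stochastic latents + a diversity-encouraging term.
# Not the DIBS objective; placeholder choices are marked in comments.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticClassifier(nn.Module):
    """One ensemble member: encoder -> Gaussian latent z -> classifier head."""
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.head = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.head(z), mu, logvar

def ensemble_loss(members, x, y, beta=1e-3, gamma=1e-2):
    """Average task loss + bottleneck KL + (placeholder) pairwise diversity reward."""
    logits, kl_terms = [], []
    for m in members:
        out, mu, logvar = m(x)
        logits.append(out)
        # KL(q(z|x) || N(0, I)): a standard information-bottleneck-style regularizer,
        # used here as a stand-in for the paper's bottleneck term.
        kl_terms.append(-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean())
    task = sum(F.cross_entropy(l, y) for l in logits) / len(members)
    bottleneck = sum(kl_terms) / len(members)
    # Placeholder diversity term: reward pairwise disagreement between members'
    # predictive distributions (the paper instead optimizes an adversarial diversity loss).
    probs = [F.softmax(l, dim=1) for l in logits]
    diversity = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            diversity = diversity + F.kl_div(probs[i].log(), probs[j], reduction="batchmean")
    return task + beta * bottleneck - gamma * diversity

# Toy usage with random data shaped like flattened MNIST images.
members = [StochasticClassifier() for _ in range(4)]
opt = torch.optim.Adam([p for m in members for p in m.parameters()], lr=1e-3)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
loss = ensemble_loss(members, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

At test time, such an ensemble would typically average the members' softmax outputs for prediction and use their disagreement as an uncertainty signal, e.g. for OOD detection; the trade-off between accuracy, compression, and diversity is governed here by the assumed weights `beta` and `gamma`.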

Published

2021-05-18

How to Cite

Sinha, S., Bharadhwaj, H., Goyal, A., Larochelle, H., Garg, A., & Shkurti, F. (2021). DIBS: Diversity Inducing Information Bottleneck in Model Ensembles. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9666-9674. https://doi.org/10.1609/aaai.v35i11.17163

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV