Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles


  • Siddhartha Jain MIT
  • Ge Liu MIT
  • Jonas Mueller Amazon Web Services
  • David Gifford MIT



The inaccuracy of neural network models on inputs that do not stem from the distribution underlying the training data is problematic and at times goes unrecognized. Uncertainty estimates of model predictions are often based on the variation in predictions produced by a diverse ensemble of models applied to the same input. Here we describe Maximize Overall Diversity (MOD), an approach to improve ensemble-based uncertainty estimates by encouraging larger overall diversity in ensemble predictions across all possible inputs. We apply MOD to regression tasks, including 38 Protein-DNA binding datasets, 9 UCI datasets, and the IMDB-Wiki image dataset. We also explore variants that utilize adversarial training techniques and data density estimation. For out-of-distribution test examples, MOD significantly improves predictive performance and uncertainty calibration without sacrificing performance on test data drawn from the same distribution as the training data. We also find that in Bayesian optimization tasks, the performance of upper confidence bound (UCB) acquisition is improved via MOD uncertainty estimates.
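The core idea above can be illustrated with a minimal sketch. The function below is illustrative, not the authors' exact implementation: each ensemble member's usual training loss is reduced by a term proportional to the ensemble's predictive variance on auxiliary inputs sampled from the broader input domain, rewarding disagreement away from the training data. The name `mod_objective` and the weight `lam` are assumptions for this example.

```python
import numpy as np

def mod_objective(member_preds_train, targets, member_preds_aux, lam=0.1):
    """Hedged sketch of a per-member MOD-style objective:
    training MSE minus lam * mean ensemble variance on auxiliary inputs.

    member_preds_train: (M, N) predictions of M members on N training points
    targets:            (N,)   training targets
    member_preds_aux:   (M, K) predictions on K auxiliary (sampled) inputs
    """
    # standard fit term for each ensemble member
    mse = np.mean((member_preds_train - targets) ** 2, axis=1)   # shape (M,)
    # "overall diversity": predictive variance across members at each
    # auxiliary input, averaged over the auxiliary inputs
    diversity = np.mean(np.var(member_preds_aux, axis=0))        # scalar
    # lower is better; larger diversity on auxiliary inputs is rewarded
    return mse - lam * diversity
```

For instance, with three members that agree on the training data, the objective is strictly lower when their predictions disagree on the auxiliary inputs, so gradient-based training on such an objective would push members apart off-distribution while still fitting the data.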




How to Cite

Jain, S., Liu, G., Mueller, J., & Gifford, D. (2020). Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4264-4271.



AAAI Technical Track: Machine Learning