Ensembles of Locally Independent Prediction Models

Authors

  • Andrew Ross, Harvard University
  • Weiwei Pan, Harvard University
  • Leo Celi, MIT
  • Finale Doshi-Velez, Harvard University

DOI:

https://doi.org/10.1609/aaai.v34i04.6004

Abstract

Ensembles depend on diversity for improved performance. Many ensemble training methods therefore attempt to optimize for diversity, which they almost always define in terms of differences in training set predictions. In this paper, however, we demonstrate that diversity of predictions on the training set does not necessarily imply diversity under mild covariate shift, which can harm generalization in practical settings. To address this issue, we introduce a new diversity metric and an associated method for training ensembles of models that extrapolate differently on local patches of the data manifold. Across a variety of synthetic and real-world tasks, we find that our method improves generalization and diversity in qualitatively novel ways, especially under data limits and covariate shift.
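The abstract does not spell out the diversity metric here. One plausible reading, consistent with "extrapolating differently on local patches of the data manifold," is to penalize agreement between ensemble members' input gradients during joint training. The sketch below illustrates that idea in PyTorch; the squared-cosine penalty form, the weighting, and all function names are illustrative assumptions for this page, not the authors' implementation.

    # Minimal sketch (assumption): encourage local independence by penalizing
    # squared cosine similarity between ensemble members' input gradients.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def local_independence_penalty(models, x):
        """Mean squared cosine similarity between input gradients of model pairs."""
        x = x.clone().requires_grad_(True)
        grads = []
        for m in models:
            out = m(x).sum()
            g, = torch.autograd.grad(out, x, create_graph=True)
            grads.append(g.flatten(1))  # shape: (batch, n_features)
        penalty, n_pairs = x.new_zeros(()), 0
        for i in range(len(grads)):
            for j in range(i + 1, len(grads)):
                cos = F.cosine_similarity(grads[i], grads[j], dim=1)
                penalty = penalty + (cos ** 2).mean()
                n_pairs += 1
        return penalty / max(n_pairs, 1)

    # Usage sketch: train two small regressors with a task loss plus the penalty.
    models = [nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1)) for _ in range(2)]
    opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-3)
    x, y = torch.randn(64, 5), torch.randn(64, 1)
    for _ in range(100):
        opt.zero_grad()
        task = sum(F.mse_loss(m(x), y) for m in models)
        loss = task + 1.0 * local_independence_penalty(models, x)  # weight is a placeholder
        loss.backward()
        opt.step()

With a penalty of this kind, members can agree on training predictions while still relying on different local directions in input space, which is the behavior the abstract attributes to the proposed ensembles under covariate shift.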

Published

2020-04-03

How to Cite

Ross, A., Pan, W., Celi, L., & Doshi-Velez, F. (2020). Ensembles of Locally Independent Prediction Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5527-5536. https://doi.org/10.1609/aaai.v34i04.6004

Section

AAAI Technical Track: Machine Learning