Regional Tree Regularization for Interpretability in Deep Neural Networks

Authors

  • Mike Wu, Stanford University
  • Sonali Parbhoo, Harvard University
  • Michael Hughes, Tufts University
  • Ryan Kindle, Massachusetts General Hospital
  • Leo Celi, MIT
  • Maurizio Zazzi, University of Siena
  • Volker Roth, University of Basel
  • Finale Doshi-Velez, Harvard University

DOI:

https://doi.org/10.1609/aaai.v34i04.6112

Abstract

The lack of interpretability remains a barrier to adopting deep neural networks across many safety-critical domains. Tree regularization was recently proposed to encourage a deep neural network's decisions to resemble those of a globally compact, axis-aligned decision tree. However, it is often unreasonable to expect a single tree to predict well across all possible inputs. In practice, doing so could lead to neither interpretable nor performant optima. To address this issue, we propose regional tree regularization – a method that encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Across many datasets, including two healthcare applications, we show our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more “desirable” optima compared to global analogues.
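
Below is a minimal illustrative sketch (not the authors' code) of the quantity that regional tree regularization penalizes: for each predefined region of the input space, a small decision tree is fit to the deep model's predictions on that region, and its average decision-path length serves as a proxy for how hard the regional behavior is to simulate. In the paper this non-differentiable quantity is approximated by a learned surrogate so it can act as a training penalty; the sketch only shows the per-region complexity computation. The helper names (`region_tree_complexity`, `average_path_length`) and the stand-in model are hypothetical.

```python
# Sketch of per-region tree complexity, assuming scikit-learn decision trees
# as the regional approximators. Not the authors' implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def average_path_length(tree: DecisionTreeClassifier, X: np.ndarray) -> float:
    """Mean number of nodes visited to classify each row of X."""
    node_indicator = tree.decision_path(X)   # sparse (n_samples, n_nodes)
    return float(node_indicator.sum(axis=1).mean())


def region_tree_complexity(model_predict, X, region_masks, max_depth=5):
    """For each region, fit a tree to the deep model's hard predictions
    and return its average path length (the regularized quantity)."""
    complexities = []
    for mask in region_masks:
        X_r = X[mask]
        y_r = model_predict(X_r)              # deep model's predicted labels
        if len(np.unique(y_r)) < 2:           # region is trivially simple
            complexities.append(0.0)
            continue
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_r, y_r)
        complexities.append(average_path_length(tree, X_r))
    return complexities


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))

    # Stand-in for a trained deep network's decision function.
    def model_predict(X):
        return (np.sin(3 * X[:, 0]) + X[:, 1] > 0).astype(int)

    # Two predefined regions, split on the first input feature.
    region_masks = [X[:, 0] < 0, X[:, 0] >= 0]
    print(region_tree_complexity(model_predict, X, region_masks))
```

Summing (or averaging) these per-region complexities and adding the result to the training loss is the intuition behind the regional penalty; the global variant would instead fit a single tree over all inputs.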

Published

2020-04-03

How to Cite

Wu, M., Parbhoo, S., Hughes, M., Kindle, R., Celi, L., Zazzi, M., Roth, V., & Doshi-Velez, F. (2020). Regional Tree Regularization for Interpretability in Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6413-6421. https://doi.org/10.1609/aaai.v34i04.6112

Section

AAAI Technical Track: Machine Learning