Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach

Authors

  • Seojin Bang, Carnegie Mellon University
  • Pengtao Xie, University of California San Diego; Petuum Inc.
  • Heewook Lee, Arizona State University
  • Wei Wu, Carnegie Mellon University
  • Eric Xing, Carnegie Mellon University; Petuum Inc.

DOI:

https://doi.org/10.1609/aaai.v35i13.17358

Keywords:

Accountability, Interpretability & Explainability

Abstract

Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are both necessary in order to convey a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, which leads to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed with respect to the input (briefness) and maximally informative about the decision made by the black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity, as evaluated by human and quantitative metrics.
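
The information bottleneck criterion referred to in the abstract is conventionally written as a trade-off between two mutual information terms. As a sketch, using the standard deep variational information bottleneck notation (x is the input, y is the black-box output, z is the selected explanation, and β is a trade-off hyperparameter; these symbols are the conventional ones, not quoted from this page):

    \max_{p(z \mid x)} \; I(z; y) \;-\; \beta \, I(z; x)

Maximizing I(z; y) makes the explanation informative about the black-box decision (comprehensiveness), while penalizing I(z; x) forces the explanation to discard as much of the input as possible (briefness); β controls the balance between the two.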

Published

2021-05-18

How to Cite

Bang, S., Xie, P., Lee, H., Wu, W., & Xing, E. (2021). Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11396-11404. https://doi.org/10.1609/aaai.v35i13.17358

Issue

Vol. 35 No. 13 (2021)

Section

AAAI Technical Track on Philosophy and Ethics of AI