Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example

Authors

  • Serena Booth, Massachusetts Institute of Technology
  • Yilun Zhou, Massachusetts Institute of Technology
  • Ankit Shah, Massachusetts Institute of Technology
  • Julie Shah, Massachusetts Institute of Technology

DOI

https://doi.org/10.1609/aaai.v35i13.17361

Keywords

Accountability, Interpretability & Explainability, Ethics -- Bias, Fairness, Transparency & Privacy, Evaluation and Analysis (Machine Learning), Human-in-the-loop Machine Learning

Abstract

Post-hoc explanation methods are gaining popularity for interpreting, understanding, and debugging neural networks. Most analyses using such methods explain decisions in response to inputs drawn from the test set. However, the test set may contain few examples that trigger some model behaviors, such as high-confidence failures or ambiguous classifications. To address this challenge, we introduce a flexible model inspection framework: Bayes-TrEx. Given a data distribution, Bayes-TrEx finds in-distribution examples that trigger a specified prediction confidence. We demonstrate several use cases of Bayes-TrEx, including revealing highly confident (mis)classifications, visualizing class boundaries via ambiguous examples, understanding novel-class extrapolation behavior, and exposing neural network overconfidence. We use Bayes-TrEx to study classifiers trained on CLEVR, MNIST, and Fashion-MNIST, and we show that this framework enables more flexible, holistic model analysis than inspecting the test set alone. Code and supplemental material are available at https://github.com/serenabooth/Bayes-TrEx.
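
To make the sampling idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: finding an in-distribution example with a target prediction confidence is cast as drawing from a posterior proportional to p(x) times a likelihood peaked where the model's confidence matches the target, sampled here with random-walk Metropolis. The 2-D Gaussian prior, the logistic "classifier", and the knobs target_conf and sigma are hypothetical stand-ins for the generative data model and trained network used in the paper.

    # Sketch of level-set sampling in the spirit of Bayes-TrEx (assumptions noted above).
    import numpy as np

    rng = np.random.default_rng(0)

    def log_prior(x):
        # Stand-in data distribution: standard 2-D Gaussian.
        return -0.5 * np.sum(x ** 2)

    def confidence(x):
        # Toy "classifier": logistic score along the first coordinate.
        return 1.0 / (1.0 + np.exp(-3.0 * x[0]))

    def log_likelihood(x, target_conf=0.5, sigma=0.02):
        # Peaked when the model's confidence matches the target level;
        # target_conf=0.5 seeks maximally ambiguous examples.
        return -0.5 * ((confidence(x) - target_conf) / sigma) ** 2

    def sample_level_set(n_steps=5000, step=0.3, target_conf=0.5):
        # Random-walk Metropolis targeting p(x) * likelihood(conf(x); target).
        x = rng.standard_normal(2)
        log_p = log_prior(x) + log_likelihood(x, target_conf)
        samples = []
        for _ in range(n_steps):
            x_new = x + step * rng.standard_normal(2)
            log_p_new = log_prior(x_new) + log_likelihood(x_new, target_conf)
            if np.log(rng.uniform()) < log_p_new - log_p:
                x, log_p = x_new, log_p_new
            samples.append(x.copy())
        return np.array(samples)

    if __name__ == "__main__":
        xs = sample_level_set(target_conf=0.5)
        print("mean confidence of samples:", np.mean([confidence(x) for x in xs[1000:]]))

Run as-is, the accepted samples concentrate where the toy classifier's confidence is near 0.5, i.e., on its decision boundary while remaining probable under the prior; the paper applies the same construction with a learned data prior and a real network.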

Published

2021-05-18

How to Cite

Booth, S., Zhou, Y., Shah, A., & Shah, J. (2021). Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11423-11432. https://doi.org/10.1609/aaai.v35i13.17361

Section

AAAI Technical Track on Philosophy and Ethics of AI