Anchors: High-Precision Model-Agnostic Explanations

Authors

  • Marco Tulio Ribeiro University of Washington
  • Sameer Singh University of California, Irvine
  • Carlos Guestrin University of Washington

DOI:

https://doi.org/10.1609/aaai.v32i1.11491

Keywords:

machine learning, interpretability

Abstract

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
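The abstract's notion of an anchor — a rule that "suffices" locally for a prediction — can be illustrated by estimating a candidate rule's precision over perturbations of an instance. This is a minimal sketch, not the authors' algorithm: the toy black-box classifier, the vocabulary, and the token-replacement perturbation scheme are all illustrative assumptions.

```python
import random

# Hypothetical black-box text classifier (illustrative only): predicts
# "positive" when "good" is present and "not" is absent.
def black_box(tokens):
    return "positive" if "good" in tokens and "not" not in tokens else "negative"

# Toy vocabulary used to perturb non-anchor tokens (an assumption, not
# the paper's perturbation distribution).
VOCAB = ["this", "movie", "is", "good", "not", "bad", "really"]

def anchor_precision(anchor, instance, model, num_samples=1000, seed=0):
    """Estimate an anchor's precision: the fraction of perturbed instances
    satisfying the anchor on which the model's prediction matches its
    prediction on the original instance."""
    rng = random.Random(seed)
    target = model(instance)
    matches = 0
    for _ in range(num_samples):
        # Keep anchor tokens fixed; resample every other token from VOCAB.
        perturbed = [t if t in anchor else rng.choice(VOCAB) for t in instance]
        if model(perturbed) == target:
            matches += 1
    return matches / num_samples

instance = ["this", "movie", "is", "good"]
anchor = {"good"}  # candidate rule: IF "good" appears THEN predict positive
print(f"estimated precision: {anchor_precision(anchor, instance, black_box):.2f}")
```

A high estimate would mean the rule is (locally) near-sufficient for the prediction; the paper's contribution is searching for such rules efficiently with high-probability precision guarantees, which this sampling sketch does not attempt.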

Published

2018-04-25

How to Cite

Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11491

Section

AAAI Technical Track: Human-AI Collaboration