Actionable Model-Centric Explanations (Student Abstract)

Authors

  • Cecilia G. Morales Carnegie Mellon University
  • Nicholas Gisolfi Carnegie Mellon University
  • Robert Edman Carnegie Mellon University
  • James K. Miller Carnegie Mellon University
  • Artur Dubrawski Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v36i11.21646

Keywords:

Machine Learning, Automated Reasoning, Logic, Applications Of AI

Abstract

We recommend using a model-centric, Boolean Satisfiability (SAT) formalism to obtain useful explanations of trained model behavior, different from and complementary to what can be gleaned from LIME and SHAP, popular data-centric explanation tools in Artificial Intelligence (AI). We compare and contrast these methods and show that data-centric methods may yield brittle explanations of limited practical utility. The model-centric framework, in contrast, can offer actionable insights into the risks of using AI models in practice. For critical applications of AI, split-second decision making is best informed by robust explanations that are invariant to properties of the data, a capability offered by model-centric frameworks.
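To make the contrast concrete, here is a minimal sketch of a model-centric query, under assumptions not taken from the abstract: `tree_predict` is a hypothetical toy decision tree, and exhaustive enumeration over its boolean inputs stands in for a SAT solver's search. A data-centric tool like LIME or SHAP explains the model's output at one input; the model-centric query instead asks whether any input at all violates a stated property, returning either a proof of invariance (no counterexample exists) or a concrete witness of risk.

```python
from itertools import product

# Hypothetical toy model: a decision tree over three boolean features.
def tree_predict(x0, x1, x2):
    if x0:
        return 1
    return 1 if (x1 and x2) else 0

def find_counterexample(premise, conclusion):
    """Search all inputs for one satisfying the premise but violating
    the conclusion; a SAT solver would answer the same query on a
    clausal encoding of the model. Returns a witness or None."""
    for x in product([0, 1], repeat=3):
        if premise(x) and not conclusion(x):
            return x
    return None

# Property 1: whenever x0 = 1, the model predicts class 1.
cex1 = find_counterexample(lambda x: x[0] == 1,
                           lambda x: tree_predict(*x) == 1)
# cex1 is None: the property holds for EVERY input, independent of
# any particular dataset -- an explanation invariant to the data.

# Property 2: whenever x1 = 1, the model predicts class 1.
cex2 = find_counterexample(lambda x: x[1] == 1,
                           lambda x: tree_predict(*x) == 1)
# cex2 is a concrete input disproving the property: an actionable
# description of when the model deviates from the expected behavior.
print("property 1 counterexample:", cex1)
print("property 2 counterexample:", cex2)
```

The witness returned for a failing property is what makes the explanation actionable: it identifies an exact operating condition under which the model misbehaves, rather than a locally fitted feature attribution that may change under perturbation of the data.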

Published

2022-06-28

How to Cite

Morales, C. G., Gisolfi, N., Edman, R., Miller, J. K., & Dubrawski, A. (2022). Actionable Model-Centric Explanations (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13019-13020. https://doi.org/10.1609/aaai.v36i11.21646