Eliminating the Impossible, Whatever Remains Must Be True: On Extracting and Applying Background Knowledge in the Context of Formal Explanations
DOI: https://doi.org/10.1609/aaai.v37i4.25528
Keywords: CSO: Satisfiability, CSO: Applications, DMKM: Applications, DMKM: Rule Mining & Pattern Mining, KRR: Applications, KRR: Automated Reasoning and Theorem Proving, ML: Transparent, Interpretable, Explainable ML
Abstract
The rise of AI methods for making predictions and decisions has led to a pressing need for more explainable artificial intelligence (XAI) methods. One common approach to XAI is to produce a post-hoc explanation of why a black-box ML model made a certain prediction. Formal approaches to post-hoc explanation provide succinct reasons for why a prediction was made, as well as why another prediction was not made. However, these approaches assume that features are independent and uniformly distributed. While this means that “why” explanations are correct, they may be longer than required. It also means that “why not” explanations may be suspect, as the counterexamples they rely on may not be meaningful. In this paper, we show how one can apply background knowledge to give more succinct formal “why” explanations, which are presumably easier for humans to interpret, and more accurate “why not” explanations. In addition, we show how to use existing rule induction techniques to efficiently extract background information from a dataset.
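To make the abstract's central idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how background knowledge can shrink a formal “why” explanation. It uses a toy Boolean model and brute-force enumeration in place of the SAT-based reasoning used in practice; the model, the instance, and the mined rule are all hypothetical.

```python
from itertools import product

def completions(fixed, n):
    """Enumerate all full 0/1 assignments that agree with `fixed` (dict idx -> val)."""
    for bits in product([0, 1], repeat=n):
        if all(bits[i] == v for i, v in fixed.items()):
            yield bits

def is_sufficient(model, fixed, n, target, admissible=lambda x: True):
    """`fixed` is a "why" explanation for `target` if every admissible
    completion of it is still predicted as `target`."""
    return all(model(x) == target
               for x in completions(fixed, n) if admissible(x))

def minimal_explanation(model, instance, admissible=lambda x: True):
    """Deletion-based computation of a subset-minimal "why" explanation:
    try dropping each fixed feature; keep the drop if sufficiency is preserved."""
    n, target = len(instance), model(instance)
    fixed = dict(enumerate(instance))
    for i in range(n):
        trial = {j: v for j, v in fixed.items() if j != i}
        if is_sufficient(model, trial, n, target, admissible):
            fixed = trial  # feature i is redundant: drop it
    return fixed

# Toy black-box model over (x0, x1, x2): predict 1 iff x0 and (x1 or x2).
model = lambda x: int(x[0] and (x[1] or x[2]))
instance = (1, 1, 1)

# Features assumed independent: the explanation needs two features.
print(minimal_explanation(model, instance))        # {0: 1, 2: 1}

# Hypothetical background rule mined from data: x0 = 1 implies x1 = 1,
# so any completion with x0 = 1 and x1 = 0 is impossible and is skipped.
rule = lambda x: not (x[0] == 1 and x[1] == 0)
print(minimal_explanation(model, instance, rule))  # {0: 1}
```

With the background rule in place, fixing x0 = 1 alone already forces the prediction, because the only counterexamples to dropping the remaining features violate the rule. This mirrors the abstract's point: ruling out impossible feature combinations yields shorter “why” explanations and more meaningful “why not” counterexamples.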
Published: 2023-06-26
How to Cite
Yu, J., Ignatiev, A., Stuckey, P. J., Narodytska, N., & Marques-Silva, J. (2023). Eliminating the Impossible, Whatever Remains Must Be True: On Extracting and Applying Background Knowledge in the Context of Formal Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4123-4131. https://doi.org/10.1609/aaai.v37i4.25528
Issue: Vol. 37 No. 4
Section: AAAI Technical Track on Constraint Satisfaction and Optimization