FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles
DOI:
https://doi.org/10.1609/aaai.v36i5.20468
Keywords:
Humans And AI (HAI), Machine Learning (ML), Philosophy And Ethics Of AI (PEAI)
Abstract
Model interpretability has become an important problem in machine learning (ML) due to the increased effect algorithmic decisions have on humans. Counterfactual explanations can help users understand not only why ML models make certain decisions, but also how these decisions can be changed. We frame the problem of finding counterfactual explanations as an optimization task and extend previous work that could only be applied to differentiable models. In order to accommodate non-differentiable models such as tree ensembles, we use probabilistic model approximations in the optimization framework. We introduce an approximation technique that is effective for finding counterfactual explanations for predictions of the original model and show that our counterfactual examples are significantly closer to the original instances than those produced by other methods specifically designed for tree ensembles.
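The core idea in the abstract can be illustrated on a toy case. The sketch below is not the authors' code; it is a minimal, hypothetical illustration of the general technique: a hard tree split is non-differentiable, so it is replaced by a sigmoid "soft split," which makes a gradient-based search for a nearby counterfactual possible. All names, the single-split tree, the loss weights, and the numerical-gradient shortcut are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hard_stump(x, threshold=0.5):
    """Original non-differentiable prediction of a one-split tree."""
    return float(x > threshold)

def soft_stump(x, threshold=0.5, sigma=10.0):
    """Differentiable approximation; sigma controls split sharpness."""
    return sigmoid(sigma * (x - threshold))

def find_counterfactual(x0, target=1.0, lam=0.1, lr=0.05, steps=500):
    """Gradient descent on: (soft prediction - target)^2 + lam * (x - x0)^2.

    The distance term keeps the counterfactual close to the original
    instance; validity is checked against the *original* hard model.
    """
    x = x0
    eps = 1e-6
    for _ in range(steps):
        def loss(v):
            return (soft_stump(v) - target) ** 2 + lam * (v - x0) ** 2
        # Central-difference numerical gradient (stand-in for autograd).
        grad = (loss(x + eps) - loss(x - eps)) / (2 * eps)
        x -= lr * grad
        if hard_stump(x) == target:
            break
    return x

x0 = 0.2                       # original instance, predicted class 0
x_cf = find_counterfactual(x0)
print(hard_stump(x0), hard_stump(x_cf))  # 0.0 1.0
```

For a full ensemble, the same smoothing would be applied to every split of every tree, and tree outputs aggregated as usual; the sharpness parameter trades off approximation fidelity against gradient quality.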
Published
2022-06-28
How to Cite
Lucic, A., Oosterhuis, H., Haned, H., & Rijke, M. de. (2022). FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5313-5322. https://doi.org/10.1609/aaai.v36i5.20468
Section
AAAI Technical Track on Humans and AI