Shaping Up SHAP: Enhancing Stability through Layer-Wise Neighbor Selection

Authors

  • Gwladys Kelodjou Univ Rennes, Inria, CNRS, IRISA - UMR 6074, F35000 Rennes, France
  • Laurence Rozé Univ Rennes, INSA Rennes, CNRS, Inria, IRISA - UMR 6074, F35000 Rennes, France
  • Véronique Masson Univ Rennes, Inria, CNRS, IRISA - UMR 6074, F35000 Rennes, France
  • Luis Galárraga Univ Rennes, Inria, CNRS, IRISA - UMR 6074, F35000 Rennes, France
  • Romaric Gaudel Univ Rennes, Inria, CNRS, IRISA - UMR 6074, F35000 Rennes, France
  • Maurice Tchuente Sorbonne University, IRD, University of Yaoundé I, UMI 209 UMMISCO, BP 337 Yaoundé, Cameroon
  • Alexandre Termier Univ Rennes, Inria, CNRS, IRISA - UMR 6074, F35000 Rennes, France

DOI:

https://doi.org/10.1609/aaai.v38i12.29208

Keywords:

ML: Transparent, Interpretable, Explainable ML, PEAI: Safety, Robustness & Trustworthiness

Abstract

Machine learning techniques, such as deep learning and ensemble methods, are widely used in various domains due to their ability to handle complex real-world tasks. However, their black-box nature has raised multiple concerns about the fairness, trustworthiness, and transparency of computer-assisted decision-making. This has led to the emergence of local post-hoc explainability methods, which offer explanations for individual decisions made by black-box algorithms. Among these methods, Kernel SHAP is widely used due to its model-agnostic nature and its well-founded theoretical framework. Despite these strengths, Kernel SHAP suffers from high instability: different executions of the method with the same inputs can lead to significantly different explanations, which diminishes the relevance of the explanations. The contribution of this paper is twofold. On the one hand, we show that Kernel SHAP's instability is caused by its stochastic neighbor selection procedure, which we adapt to achieve full stability without compromising explanation fidelity. On the other hand, we show that by restricting neighbor generation to perturbations of size 1 -- which we call the coalitions of Layer 1 -- we obtain a novel feature-attribution method that is fully stable, computationally efficient, and still meaningful.
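To make the source of instability concrete, the sketch below contrasts a stochastic coalition (neighbor) sampler, loosely in the spirit of Kernel SHAP's sampling step, with a deterministic enumeration of all size-1 perturbations (the "Layer 1" coalitions the abstract refers to). This is an illustrative simplification, not the paper's actual algorithm: the function names and the uniform size distribution are assumptions for exposition only.

```python
import random

def sample_coalitions(n_features, n_samples, seed=None):
    """Illustrative stochastic neighbor selection: each neighbor is a
    random subset of features to perturb. Different runs (seeds) yield
    different coalition sets, hence potentially different explanations."""
    rng = random.Random(seed)
    coalitions = []
    for _ in range(n_samples):
        # Pick a coalition size uniformly (a simplification; Kernel SHAP
        # weights sizes by the Shapley kernel).
        size = rng.randint(1, n_features - 1)
        coalitions.append(frozenset(rng.sample(range(n_features), size)))
    return coalitions

def layer1_coalitions(n_features):
    """Deterministic 'Layer 1' enumeration: all perturbations of size 1.
    The neighbor set is fixed, so repeated runs see identical inputs and
    the resulting attributions are fully stable."""
    return [frozenset({i}) for i in range(n_features)]
```

Note that `layer1_coalitions` produces exactly `n_features` coalitions regardless of any random state, which is why a method built on it is both stable across runs and cheap to evaluate.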

Published

2024-03-24

How to Cite

Kelodjou, G., Rozé, L., Masson, V., Galárraga, L., Gaudel, R., Tchuente, M., & Termier, A. (2024). Shaping Up SHAP: Enhancing Stability through Layer-Wise Neighbor Selection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13094-13103. https://doi.org/10.1609/aaai.v38i12.29208

Section

AAAI Technical Track on Machine Learning III