Approximating the Shapley Value without Marginal Contributions

Authors

  • Patrick Kolpaczki, Paderborn University
  • Viktor Bengs, Institute of Informatics, University of Munich (LMU); Munich Center for Machine Learning
  • Maximilian Muschalik, Institute of Informatics, University of Munich (LMU); Munich Center for Machine Learning
  • Eyke Hüllermeier, Institute of Informatics, University of Munich (LMU); Munich Center for Machine Learning

DOI:

https://doi.org/10.1609/aaai.v38i12.29225

Keywords:

ML: Transparent, Interpretable, Explainable ML, GTEP: Cooperative Game Theory

Abstract

The Shapley value, arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used intensively in explainable artificial intelligence. Its meaningfulness stems from axiomatic properties that only the Shapley value satisfies, which, however, comes at the expense of an exact computation whose cost grows exponentially with the number of agents. Accordingly, a number of works are devoted to the efficient approximation of the Shapley value, most of which revolve around the notion of an agent's marginal contribution. In this paper, we propose SVARM and Stratified SVARM, two parameter-free and domain-independent approximation algorithms based on a representation of the Shapley value that is detached from the notion of marginal contribution. We prove unmatched theoretical guarantees regarding their approximation quality and provide empirical results on synthetic games as well as common explainability use cases, comparing our algorithms with state-of-the-art methods.
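For intuition, the following is a minimal sketch (in generic notation, not necessarily that of the paper) of how the Shapley value of player i in a game \(\nu\) on player set \(N\) with \(|N| = n\), classically defined via marginal contributions, can be rewritten as a difference of two expectations stratified by coalition size; a marginal-contribution-free representation of this kind is what such approximation schemes build on:

\[
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(\nu(S \cup \{i\}) - \nu(S)\bigr)
\;=\; \phi_i^{+} - \phi_i^{-},
\]

since the weight equals \(\tfrac{1}{n\binom{n-1}{|S|}}\) and is therefore uniform within each coalition-size stratum, with

\[
\phi_i^{+} = \frac{1}{n}\sum_{\ell=0}^{n-1} \mathbb{E}_{S \sim \mathrm{Unif}\{S \subseteq N \setminus \{i\} : |S| = \ell\}}\bigl[\nu(S \cup \{i\})\bigr],
\qquad
\phi_i^{-} = \frac{1}{n}\sum_{\ell=0}^{n-1} \mathbb{E}_{S \sim \mathrm{Unif}\{S \subseteq N \setminus \{i\} : |S| = \ell\}}\bigl[\nu(S)\bigr].
\]

Each of the two terms can be estimated by sampling coalitions and evaluating \(\nu\) once per sample, without ever forming a marginal contribution explicitly; how SVARM and Stratified SVARM allocate samples across players and strata is detailed in the paper.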

Published

2024-03-24

How to Cite

Kolpaczki, P., Bengs, V., Muschalik, M., & Hüllermeier, E. (2024). Approximating the Shapley Value without Marginal Contributions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13246-13255. https://doi.org/10.1609/aaai.v38i12.29225

Issue

Vol. 38 No. 12
Section

AAAI Technical Track on Machine Learning III