Explainable Shapley-Based Allocation (Student Abstract)

Authors

  • Meir Nizri, Ariel University
  • Noam Hazon, Ariel University
  • Amos Azaria, Ariel University

DOI:

https://doi.org/10.1609/aaai.v36i11.21648

Keywords:

Shapley Value, Explainable AI, Human Perception

Abstract

The Shapley value is one of the most important normative division schemes in cooperative game theory, satisfying basic axioms. However, some allocations according to the Shapley value may seem unfair to humans. In this paper, we develop an automatic method that generates intuitive explanations for a Shapley-based payoff allocation, utilizing the basic axioms. Given a coalitional game, our method decomposes it into sub-games for which it is easy to generate verbal explanations, and shows that the given game is composed of these sub-games. Since the payoff allocation for each sub-game is perceived as fair, the Shapley-based payoff allocation for the given game should seem fair as well. We run an experiment with 210 human participants and show that when applying our method, humans perceive the Shapley-based payoff allocation as significantly fairer than when using a general standard explanation.
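
The decomposition rests on the additivity axiom: if a game v is the sum of sub-games v1 and v2, then the Shapley allocation of v is the sum of the sub-games' allocations. The following minimal Python sketch illustrates this; the function shapley_values and the example games v1 and v2 are hypothetical illustrations, not the paper's actual decomposition method.

    from itertools import permutations
    from math import factorial

    def shapley_values(players, v):
        """Exact Shapley values: average each player's marginal
        contribution over all orderings of the players."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = frozenset()
            for p in order:
                phi[p] += v(coalition | {p}) - v(coalition)
                coalition = coalition | {p}
        n_orders = factorial(len(players))
        return {p: total / n_orders for p, total in phi.items()}

    # Hypothetical three-player game: v = v1 + v2, where v1 rewards
    # player "a" alone and v2 is a symmetric game requiring both
    # "b" and "c". Each sub-game has an easy verbal explanation.
    def v1(S): return 10 if "a" in S else 0
    def v2(S): return 6 if {"b", "c"} <= S else 0
    def v(S):  return v1(S) + v2(S)

    players = ["a", "b", "c"]
    print(shapley_values(players, v))   # {'a': 10.0, 'b': 3.0, 'c': 3.0}
    print(shapley_values(players, v1))  # {'a': 10.0, 'b': 0.0, 'c': 0.0}
    print(shapley_values(players, v2))  # {'a': 0.0, 'b': 3.0, 'c': 3.0}

Since the allocation of each sub-game (the full reward to "a"; an even split between "b" and "c") is easy to justify verbally, the additivity axiom carries that perceived fairness over to the allocation of the composed game.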

Published

2022-06-28

How to Cite

Nizri, M., Hazon, N., & Azaria, A. (2022). Explainable Shapley-Based Allocation (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13023-13024. https://doi.org/10.1609/aaai.v36i11.21648