Interpreting Multivariate Shapley Interactions in DNNs

Authors

  • Hao Zhang, Shanghai Jiao Tong University
  • Yichen Xie, Shanghai Jiao Tong University
  • Longjie Zheng, Shanghai Jiao Tong University
  • Die Zhang, Shanghai Jiao Tong University
  • Quanshi Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v35i12.17299

Keywords:

(Deep) Neural Network Learning Theory

Abstract

This paper explains deep neural networks (DNNs) from the perspective of multivariate interactions. Specifically, we define and quantify the significance of interactions among multiple input variables of a DNN. Input variables with strong interactions usually form a coalition and reflect a prototype feature, which the DNN memorizes and uses for inference. We define the significance of an interaction based on the Shapley value, which is designed to assign each input variable its attribution to the inference. We conduct experiments with various DNNs, and the results demonstrate the effectiveness of the proposed method.
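As a concrete illustration of the quantities the abstract refers to, the sketch below computes exact Shapley values and the classical Shapley interaction index (Grabisch and Roubens, 1999) for a small toy game. It is an assumption-laden demonstration, not the authors' released code, and the paper's multivariate interaction metric may differ in its exact form: in the paper's setting, the "players" would be a DNN's input variables and v(S) the network output with variables outside S masked, whereas the toy v here is chosen only so the numbers can be checked by hand.

```python
# Minimal sketch: exact Shapley values and the Shapley interaction index
# for a toy cooperative game. Players are labeled 0..n-1 and v maps a
# set of players to a real-valued payoff.
from itertools import combinations
from math import factorial


def shapley_value(players, v, i):
    """Exact Shapley value of player i under value function v."""
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(set(S) | {i}) - v(set(S)))
    return total


def shapley_interaction(players, v, T):
    """Shapley interaction index of coalition T (exact enumeration)."""
    T = set(T)
    n, t = len(players), len(T)
    rest = [p for p in players if p not in T]
    total = 0.0
    for k in range(len(rest) + 1):
        for S in combinations(rest, k):
            weight = factorial(k) * factorial(n - t - k) / factorial(n - t + 1)
            # Discrete derivative of v with respect to T, evaluated at S.
            delta = 0.0
            for m in range(t + 1):
                for L in combinations(sorted(T), m):
                    delta += (-1) ** (t - m) * v(set(S) | set(L))
            total += weight * delta
    return total


if __name__ == "__main__":
    players = [0, 1, 2]
    # Toy value function (an assumption for demonstration): players 0 and 1
    # only pay off together, player 2 contributes independently.
    v = lambda S: (2.0 if {0, 1} <= S else 0.0) + (1.0 if 2 in S else 0.0)
    print([shapley_value(players, v, i) for i in players])  # [1.0, 1.0, 1.0]
    print(shapley_interaction(players, v, {0, 1}))  # 2.0 (strong interaction)
    print(shapley_interaction(players, v, {0, 2}))  # 0.0 (no interaction)
```

Note that exact enumeration over all subsets is exponential in the number of players, so applying such measures to real DNN inputs requires sampling-based approximation rather than the brute-force loops above.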

Published

2021-05-18

How to Cite

Zhang, H., Xie, Y., Zheng, L., Zhang, D., & Zhang, Q. (2021). Interpreting Multivariate Shapley Interactions in DNNs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10877-10886. https://doi.org/10.1609/aaai.v35i12.17299

Section

AAAI Technical Track on Machine Learning V