Factorized Explainer for Graph Neural Networks

Authors

  • Rundong Huang Technical University of Munich, Munich, Germany
  • Farhad Shirani Florida International University, Miami, U.S.
  • Dongsheng Luo Florida International University, Miami, U.S.

DOI:

https://doi.org/10.1609/aaai.v38i11.29157

Keywords:

ML: Graph-based Machine Learning

Abstract

Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data. To open the black box of these deep learning models, post-hoc instance-level explanation methods have been proposed to understand GNN predictions. These methods seek to discover substructures that explain the prediction behavior of a trained GNN. In this paper, we show analytically that, for a large class of explanation tasks, conventional approaches based on the graph information bottleneck (GIB) principle admit trivial solutions that do not align with the notion of explainability. Instead, we argue that a modified GIB principle may be used to avoid these trivial solutions. We further introduce a novel factorized explanation model with theoretical performance guarantees. The modified GIB principle is used to analyze the structural properties of the proposed factorized explainer. We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of our proposed factorized explainer.
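For context, the GIB objective referenced in the abstract is commonly written as a trade-off between keeping the explanation subgraph predictive and keeping it compressed. A standard formulation (notation assumed here, not taken from this page: $G$ the input graph, $G_s \subseteq G$ the explanation subgraph, $Y$ the GNN prediction, $\beta$ a trade-off coefficient) is:

```latex
% Graph information bottleneck (GIB) objective for post-hoc explanation:
% retain mutual information with the prediction Y while compressing G.
\operatorname*{arg\,min}_{G_s \subseteq G} \; -\, I(Y; G_s) \;+\; \beta \, I(G; G_s)
```

The paper's analytical claim concerns cases where this objective is minimized by subgraphs that are trivially predictive (e.g., large or label-leaking substructures) yet uninformative as explanations, motivating the modified GIB principle it proposes.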

Published

2024-03-24

How to Cite

Huang, R., Shirani, F., & Luo, D. (2024). Factorized Explainer for Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12626-12634. https://doi.org/10.1609/aaai.v38i11.29157

Section

AAAI Technical Track on Machine Learning II