Stratified GNN Explanations through Sufficient Expansion
DOI: https://doi.org/10.1609/aaai.v38i11.29180
Keywords: ML: Transparent, Interpretable, Explainable ML
Abstract
Explaining the decisions made by Graph Neural Networks (GNNs) is vital for establishing trust and ensuring fairness in critical applications such as medicine and science. The prevalence of hierarchical structure in real-world graphs and networks raises an important question about GNN interpretability: "At each level of the graph structure, which specific part exerts the greatest influence on the prediction?" The two prevailing categories of existing methods cannot produce multi-level GNN explanations because of their flat or motif-centric nature. In this work, we formulate the problem of learning multi-level explanations from GNN models and introduce a stratified explainer module, STFExplainer, which uses the concept of sufficient expansion to generate an explanation on each stratum. Specifically, we learn a higher-level subgraph generator by leveraging both the hierarchical structure and GNN-encoded input features. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our stratified explainer on standard interpretability tasks and metrics such as fidelity and explanation recall, with average improvements of 11% and 8% over the best alternative on the two data types, respectively. A case study in the materials domain further confirms the value of our approach: the detected multi-level graph patterns accurately reconstruct the knowledge-based ground truth.
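The fidelity metric referenced in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical illustration (not the authors' STFExplainer code) of the commonly used Fidelity+ score: the drop in the model's predicted probability for the target class when the explanation edges are removed from the graph. The `model` callable and the edge-mask representation are assumptions introduced purely for illustration.

import numpy as np

def fidelity_plus(model, adj, feats, expl_mask, target_class):
    """Fidelity+ score for a candidate explanation subgraph.

    model        -- hypothetical callable: (adj, feats) -> class-probability vector
    adj          -- (n, n) adjacency matrix of the input graph
    feats        -- (n, d) node feature matrix
    expl_mask    -- (n, n) binary mask marking the explanation edges
    target_class -- index of the predicted class being explained
    """
    # Prediction on the full graph.
    p_full = model(adj, feats)[target_class]
    # Prediction after removing the explanation edges.
    p_removed = model(adj * (1 - expl_mask), feats)[target_class]
    # Larger drop => the explanation was more necessary to the prediction.
    return p_full - p_removed

Under this convention, a larger Fidelity+ indicates that the explanation subgraph was more influential on the model's decision; a stratified explanation can, in principle, be scored this way separately at each level of the graph hierarchy.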
Published: 2024-03-24
How to Cite
Ji, Y., Shi, L., Liu, Z., & Wang, G. (2024). Stratified GNN Explanations through Sufficient Expansion. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12839-12847. https://doi.org/10.1609/aaai.v38i11.29180
Section: AAAI Technical Track on Machine Learning II