Self-Interpretable Graph Learning with Sufficient and Necessary Explanations

Authors

  • Jiale Deng, Department of Computer Science and Engineering, Shanghai Jiao Tong University
  • Yanyan Shen, Department of Computer Science and Engineering, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v38i10.29059

Keywords:

ML: Transparent, Interpretable, Explainable ML, ML: Graph-based Machine Learning

Abstract

Self-interpretable graph learning methods help unveil the black-box nature of GNNs by producing predictions with built-in explanations. However, current works suffer from performance degradation compared to GNNs trained without built-in explanations. We argue the main reason is that they fail to generate explanations satisfying both sufficiency and necessity, and the biased explanations further hurt GNNs' performance. In this work, we propose a novel framework for generating SUfficient aNd NecessarY explanations (SUNNY-GNN for short) that benefit GNNs' predictions. The key idea is to conduct augmentations by structurally perturbing given explanations and to employ a contrastive loss that guides the learning of explanations toward sufficiency and necessity. SUNNY-GNN introduces two coefficients to generate hard and reliable contrastive samples. We further extend SUNNY-GNN to heterogeneous graphs. Empirical results on various GNNs and real-world graphs show that SUNNY-GNN yields accurate predictions and faithful explanations, outperforming state-of-the-art methods by improving prediction accuracy by 3.5% and explainability fidelity by 13.1% on average. Our code and data are available at https://github.com/SJTU-Quant/SUNNY-GNN.
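As a rough illustration of the idea sketched in the abstract (not the authors' implementation), the following minimal Python example perturbs a toy edge-importance mask and scores augmentations with an InfoNCE-style contrastive loss; the function names, the drop-based perturbation, and the single-positive loss form are all hypothetical simplifications.

```python
import math
import random

def perturb_mask(mask, drop_p=0.3, rng=None):
    """Randomly zero out entries of an edge-importance mask -- a toy
    stand-in for structurally perturbing an explanation (hypothetical)."""
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < drop_p else m for m in mask]

def contrastive_loss(pos_score, neg_scores, tau=0.5):
    """InfoNCE-style loss: reward a high prediction score for the
    sufficiency-preserving (positive) augmentation relative to the
    necessity-violating (negative) augmentations."""
    pos = math.exp(pos_score / tau)
    neg = sum(math.exp(s / tau) for s in neg_scores)
    return -math.log(pos / (pos + neg))

# Toy usage: a mask over 5 edges and two contrastive scenarios.
mask = [0.9, 0.8, 0.1, 0.7, 0.2]
augmented = perturb_mask(mask)
loss_good = contrastive_loss(2.0, [0.1, 0.2])  # confident positive
loss_bad = contrastive_loss(0.1, [2.0, 2.2])   # weak positive
```

In the paper's framing, the two coefficients mentioned in the abstract would weight which augmentations serve as hard and reliable contrastive samples; this sketch omits that weighting for brevity.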

Published

2024-03-24

How to Cite

Deng, J., & Shen, Y. (2024). Self-Interpretable Graph Learning with Sufficient and Necessary Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11749-11756. https://doi.org/10.1609/aaai.v38i10.29059

Section

AAAI Technical Track on Machine Learning I