ProtGNN: Towards Self-Explaining Graph Neural Networks

Authors

  • Zaixi Zhang, University of Science and Technology of China
  • Qi Liu, University of Science and Technology of China
  • Hao Wang, University of Science and Technology of China
  • Chengqiang Lu, University of Science and Technology of China
  • Cheekong Lee, Tencent America

DOI:

https://doi.org/10.1609/aaai.v36i8.20898

Keywords:

Machine Learning (ML)

Abstract

Despite recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions they make. Existing explanation methods mainly focus on post-hoc explanations, where a separate explanatory model is employed to explain a trained GNN. Because post-hoc methods fail to reveal the original reasoning process of GNNs, there is a need for GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs. In ProtGNN, the explanations are naturally derived from the case-based reasoning process and are actually used during classification. ProtGNN makes its prediction by comparing the input to a few learned prototypes in the latent space. Furthermore, for better interpretability and higher efficiency, a novel conditional subgraph sampling module is incorporated in ProtGNN+ to indicate which part of the input graph is most similar to each prototype. Finally, we evaluate our method on a wide range of datasets and perform concrete case studies. Extensive results show that ProtGNN and ProtGNN+ provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
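
To make the case-based reasoning concrete, below is a minimal PyTorch sketch of a prototype-based classification head of the kind the abstract describes: class logits are computed from similarities between a graph embedding and a small set of learned prototype vectors. The encoder, the similarity function, and all layer sizes are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn


class PrototypeClassifier(nn.Module):
    def __init__(self, embed_dim: int, num_prototypes: int, num_classes: int, eps: float = 1e-4):
        super().__init__()
        # Learned prototype vectors living in the same latent space as the graph embeddings.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        # Maps prototype-similarity scores to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)
        self.eps = eps

    def forward(self, graph_embedding: torch.Tensor) -> torch.Tensor:
        # graph_embedding: (batch, embed_dim), e.g. a pooled readout from any GNN encoder.
        # Squared L2 distance from each embedding to every prototype: (batch, num_prototypes).
        dist = torch.cdist(graph_embedding, self.prototypes).pow(2)
        # One common way to turn distances into similarities (higher = closer); an assumption here.
        sim = torch.log((dist + 1.0) / (dist + self.eps))
        # The prediction is a linear function of the similarities to the learned prototypes,
        # so the same scores that drive the classification also serve as the explanation.
        return self.classifier(sim)


# Usage: attach behind any GNN encoder that produces a fixed-size graph embedding.
model = PrototypeClassifier(embed_dim=128, num_prototypes=5, num_classes=2)
logits = model(torch.randn(8, 128))  # a batch of 8 graph embeddings

Because the logits depend only on the similarity scores, inspecting which prototypes an input is closest to (and, in ProtGNN+, which sampled subgraph is most similar to each prototype) yields the explanation directly from the prediction path rather than from a separate post-hoc model.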

Published

2022-06-28

How to Cite

Zhang, Z., Liu, Q., Wang, H., Lu, C., & Lee, C. (2022). ProtGNN: Towards Self-Explaining Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9127-9135. https://doi.org/10.1609/aaai.v36i8.20898

Section

AAAI Technical Track on Machine Learning III