Algorithmic Concept-Based Explainable Reasoning

Authors

  • Dobrik Georgiev University of Cambridge
  • Pietro Barbiero University of Cambridge
  • Dmitry Kazhdan University of Cambridge
  • Petar Veličković DeepMind
  • Pietro Lió University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v36i6.20623

Keywords:

Machine Learning (ML)

Abstract

Recent research on graph neural network (GNN) models has successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems. This has numerous benefits, such as allowing applications of algorithms when preconditions are not satisfied, or reusing learned models when sufficient training data is not available or cannot be generated. Unfortunately, a key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly. In this work, we address this limitation by applying existing work on concept-based explanations to GNN models. We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism. Using three case studies we demonstrate that: (i) our proposed model is capable of accurately learning concepts and extracting propositional formulas based on the learned concepts for each target class; (ii) our concept-based GNN models achieve comparable performance with state-of-the-art models; (iii) we can derive global graph concepts, without explicitly providing any supervision on graph-level concepts.
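To make the abstract's central idea concrete, here is a minimal NumPy sketch of a concept-bottleneck readout in the general sense: node embeddings are first mapped through a low-dimensional layer of sigmoid "concept" units, and the downstream prediction depends only on the pooled concept activations. This is an illustrative toy, not the authors' implementation; all function names, shapes, and the mean-pooling choice are assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concept_bottleneck_readout(node_embeddings, W_c, b_c, W_y, b_y):
    """Toy sketch: node embeddings -> concept scores -> pooled concepts -> logits."""
    # Per-node concept activations (the "bottleneck"): each unit is
    # intended to align with a human-interpretable concept.
    node_concepts = sigmoid(node_embeddings @ W_c + b_c)  # (n_nodes, n_concepts)
    # Graph-level concepts emerge by pooling node-level concept scores,
    # without any explicit graph-level concept supervision.
    graph_concepts = node_concepts.mean(axis=0)           # (n_concepts,)
    # The task prediction is a function of concepts only, so it can be
    # explained in terms of them (e.g. via extracted propositional formulas).
    logits = graph_concepts @ W_y + b_y                   # (n_classes,)
    return node_concepts, graph_concepts, logits

# Illustrative shapes: 5 nodes with 8-dim embeddings, 3 concepts, 2 classes.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))
W_c = rng.normal(size=(8, 3)); b_c = np.zeros(3)
W_y = rng.normal(size=(3, 2)); b_y = np.zeros(2)
node_c, graph_c, logits = concept_bottleneck_readout(h, W_c, b_c, W_y, b_y)
```

Because the classifier sees only `graph_concepts`, thresholding the concept activations yields Boolean truth values from which class-level logic formulas can be read off, which is the style of explanation the paper targets.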

Published

2022-06-28

How to Cite

Georgiev, D., Barbiero, P., Kazhdan, D., Veličković, P., & Lió, P. (2022). Algorithmic Concept-Based Explainable Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6685-6693. https://doi.org/10.1609/aaai.v36i6.20623

Section

AAAI Technical Track on Machine Learning I