Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs


  • Orfeas Menis Mastromichalakis National Technical University of Athens
  • Edmund Dervakos National Technical University of Athens
  • Alexandros Chortaras National Technical University of Athens
  • Giorgos Stamou National Technical University of Athens



XAI, Explainability, Knowledge Graphs, Rule-based, Explanations


The use of symbolic knowledge representation and reasoning to address the lack of transparency of machine learning classifiers is a research area that has recently gained considerable traction. In this work, we use knowledge graphs as the underlying framework that provides the terminology for representing explanations of a machine learning classifier's operation. This escapes the constraint of expressing explanations in terms of the features of the raw data, offering a promising solution to the problem of explanation understandability. In particular, given a description of the classifier's application domain in the form of a knowledge graph, we introduce a novel theoretical framework for representing explanations of its operation as query-based rules expressed in the terminology of the knowledge graph. This makes it possible to explain opaque black-box classifiers using terminology and information that are independent of the classifier's features and domain of application, leading to more understandable explanations, and also allows the creation of different levels of explanation tailored to the end user.
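To make the idea of a query-based rule over a knowledge graph concrete, the following is a minimal, hypothetical sketch (not the paper's actual framework): a toy knowledge graph stored as triples, a rule whose body is a small conjunctive query in the graph's terminology, and a check of whether an item's semantic description satisfies the rule. All names (`depicts`, `subClassOf`, `pet_photo`, the items) are invented for illustration.

```python
# Hypothetical sketch: a rule-based explanation as a conjunctive query
# over a tiny knowledge graph (toy vocabulary, assumed for illustration).

# Knowledge graph as a set of (subject, predicate, object) triples.
kg = {
    ("img1", "depicts", "cat"),
    ("img1", "depicts", "sofa"),
    ("cat", "subClassOf", "animal"),
    ("img2", "depicts", "car"),
}

def entailed(kg, s, p, o):
    """Naive entailment check with one step of subClassOf reasoning."""
    if (s, p, o) in kg:
        return True
    # e.g. "img1 depicts cat" and "cat subClassOf animal"
    # together entail "img1 depicts animal"
    nodes = {t[0] for t in kg} | {t[2] for t in kg}
    return any((s, p, mid) in kg and (mid, "subClassOf", o) in kg
               for mid in nodes)

# A query-based rule: IF ?x depicts an animal THEN classify ?x as "pet_photo".
rule = {"body": [("?x", "depicts", "animal")], "head": "pet_photo"}

def explain(kg, rule, item):
    """Return the rule's class label if the item satisfies the rule body."""
    ok = all(entailed(kg, item, p, o) for (_, p, o) in rule["body"])
    return rule["head"] if ok else None

print(explain(kg, rule, "img1"))  # rule fires: img1 depicts a cat, an animal
print(explain(kg, rule, "img2"))  # rule does not fire
```

Because the rule body is phrased in the knowledge graph's terminology (`animal`) rather than in the classifier's raw input features (e.g. pixels), the same mechanism can yield explanations at different levels of abstraction by querying at different levels of the class hierarchy.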






Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge