Aligning Artificial Neural Networks and Ontologies towards Explainable AI

Authors

  • Manuel de Sousa Ribeiro NOVA University Lisbon
  • João Leite NOVA University Lisbon

DOI:

https://doi.org/10.1609/aaai.v35i6.16626

Keywords:

Neuro-Symbolic AI (NSAI), (Deep) Neural Network Learning Theory, Knowledge Representation Languages

Abstract

Neural networks have been key to solving a variety of problems. However, neural network models are still regarded as black boxes, since they provide no human-interpretable evidence for why they output a particular result. We address this issue by leveraging ontologies and building small classifiers that map a neural network model's internal state to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural network models. Using an image classification problem as a testing ground, we discuss how to map the internal state of a neural network to the concepts of an ontology, examine whether the results obtained through the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
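The abstract's core idea can be illustrated with a minimal sketch: train one small classifier per ontology concept on a network's hidden-layer activations, then read off which concepts hold for a new input. The code below is not the authors' implementation; all data, concept names, and layer sizes are hypothetical placeholders chosen only to show the shape of the approach.

```python
# Illustrative sketch (hypothetical data and names, not the authors' code):
# map a network's internal state to ontology concepts via small classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: hidden-layer activations for 1000 inputs, plus binary labels
# indicating whether each ontology concept applies to each input.
hidden_activations = rng.normal(size=(1000, 128))        # (n_samples, n_units)
concept_labels = {                                        # concepts from the ontology
    "HasWheels": rng.integers(0, 2, size=1000),
    "HasWings":  rng.integers(0, 2, size=1000),
}

# One small classifier per concept, mapping internal state -> concept.
concept_classifiers = {
    concept: LogisticRegression(max_iter=1000).fit(hidden_activations, labels)
    for concept, labels in concept_labels.items()
}

# For a new input, the concepts predicted to hold can then be passed to a
# reasoner over the ontology to build a symbolic justification of the output.
new_activation = rng.normal(size=(1, 128))
predicted_concepts = [c for c, clf in concept_classifiers.items()
                      if clf.predict(new_activation)[0] == 1]
print(predicted_concepts)
```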

Published

2021-05-18

How to Cite

de Sousa Ribeiro, M., & Leite, J. (2021). Aligning Artificial Neural Networks and Ontologies towards Explainable AI. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6), 4932-4940. https://doi.org/10.1609/aaai.v35i6.16626

Section

AAAI Technical Track Focus Area on Neuro-Symbolic AI