Neural Reasoning Networks: Efficient Interpretable Neural Networks with Automatic Textual Explanations

Authors

  • Stephen Carrow International Business Machines
  • Kyle Erwin International Business Machines
  • Olga Vilenskaia International Business Machines
  • Parikshit Ram International Business Machines
  • Tim Klinger International Business Machines
  • Naweed Khan International Business Machines
  • Ndivhuwo Makondo International Business Machines
  • Alexander G. Gray Centaur AI Institute

DOI:

https://doi.org/10.1609/aaai.v39i15.33720

Abstract

Recent advances in machine learning have led to a surge in the adoption of neural networks for various tasks, but their lack of interpretability remains a barrier for tasks in which an understanding of the features influencing the prediction is necessary to ensure fairness, safety, and legal compliance. In this paper we consider one class of such tasks, tabular dataset classification, and propose a novel neuro-symbolic architecture, Neural Reasoning Networks (NRN), that is scalable and generates logically sound textual explanations for its predictions. NRNs are connected layers of logical neurons that implement a form of real-valued logic. A training algorithm (R-NRN) learns the weights of the network as usual using gradient-descent optimization with backpropagation, but also learns the network structure itself using a bandit-based optimization. Both are implemented in an extension to PyTorch that takes full advantage of GPU scaling and batched training. Evaluation on a diverse set of 22 open-source datasets for tabular classification demonstrates performance (measured by ROC AUC) that improves over a Multilayer Perceptron (MLP) and is statistically similar to other state-of-the-art approaches such as Random Forest, XGBoost, and Gradient Boosted Trees, while offering 43% faster training and a more than two-orders-of-magnitude reduction in the number of parameters required, on average. Furthermore, R-NRN explanations are shorter than those of the compared approaches while producing more accurate feature importance scores.
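To make the phrase "layers of logical neurons that implement a form of real-valued logic" concrete, the sketch below shows one common formulation of weighted real-valued conjunction and disjunction (Łukasiewicz-style, as used in related logical neural network work). This is an illustrative assumption, not the exact NRN activation defined in the paper: the function names, the bias term `beta`, and the NumPy formulation here are hypothetical simplifications.

```python
import numpy as np

def logical_and(x, w, beta=1.0):
    # Weighted real-valued conjunction: output is near 1 only when
    # every input with a large weight is itself near 1. Inputs with
    # weight 0 are ignored, which is how learned weights can act as
    # a soft feature-selection mechanism.
    return float(np.clip(beta - np.dot(w, 1.0 - x), 0.0, 1.0))

def logical_or(x, w, beta=1.0):
    # Weighted real-valued disjunction, the De Morgan dual of the
    # conjunction above: output is near 1 if any heavily weighted
    # input is near 1.
    return float(np.clip(1.0 - beta + np.dot(w, x), 0.0, 1.0))

# Truth values in [0, 1]; the third input is irrelevant (weight 0).
x = np.array([1.0, 1.0, 0.2])
w = np.array([1.0, 1.0, 0.0])
print(logical_and(x, w))  # -> 1.0: all weighted inputs are true

# Disjunction fires on a single weighted input being mostly true.
print(logical_or(np.array([0.0, 0.0, 0.9]), np.array([0.0, 0.0, 1.0])))  # -> 0.9
```

Because each neuron corresponds to a weighted AND or OR over named inputs, a prediction can be read back as a logical rule over the input features, which is what makes textual explanations of the kind described in the abstract possible.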

Published

2025-04-11

How to Cite

Carrow, S., Erwin, K., Vilenskaia, O., Ram, P., Klinger, T., Khan, N., … Gray, A. G. (2025). Neural Reasoning Networks: Efficient Interpretable Neural Networks with Automatic Textual Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 15669–15677. https://doi.org/10.1609/aaai.v39i15.33720

Section

AAAI Technical Track on Machine Learning I