Neural Reasoning for Sure Through Constructing Explainable Models
DOI:
https://doi.org/10.1609/aaai.v39i11.33262
Abstract
Neural networks remain black-box systems, unsure about their outputs, and their performance may drop unpredictably in real applications. An open question is how to qualitatively extend neural networks so that they are sure about their reasoning results, or reasoning-for-sure. Here, we introduce set-theoretic relations explicitly and seamlessly into neural networks by extending vector embedding into sphere embedding, so that part-whole relations can explicitly encode set-theoretic relations through sphere boundaries in the vector space. A reasoning-for-sure neural network successfully constructs, within a constant number M of epochs, a sphere configuration as its semantic model for any consistent set-theoretic relation. We implement Hyperbolic Sphere Neural Network (HSphNN), the first reasoning-for-sure neural network for all types of Aristotelian syllogistic reasoning. Its construction process is realised as a sequence of neighbourhood transitions from the current towards the target configuration. We prove M=1 for HSphNN. In experiments, HSphNN achieves the symbolic-level rigour of syllogistic reasoning and successfully checks both decisions and explanations of ChatGPT (gpt-3.5-turbo and gpt-4o) without errors. Through prompts, HSphNN improves the performance of gpt-3.5-turbo from 46.875% to 58.98%, and of gpt-4o from 82.42% to 84.76%. We show ways to extend HSphNN for various kinds of logical and Bayesian reasoning, and to integrate it with traditional neural networks seamlessly.
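The abstract's core idea, that sphere boundaries can encode set-theoretic relations, can be illustrated with a minimal sketch. This is not the paper's HSphNN implementation (which works in hyperbolic space with learned configurations); it only shows, under Euclidean assumptions and with illustrative names, how part-whole relations between spheres model syllogistic statements: "All A are B" as sphere A contained in sphere B, and "No A are B" as disjoint spheres.

```python
import math

# Hypothetical sketch: a sphere is a (center, radius) pair.
# These helper names are illustrative, not the paper's API.

def contains(outer, inner):
    # "All inner are outer": the inner sphere lies entirely inside the outer one.
    (co, ro), (ci, ri) = outer, inner
    return math.dist(co, ci) + ri <= ro

def disjoint(a, b):
    # "No a are b": the two spheres do not overlap.
    (ca, ra), (cb, rb) = a, b
    return math.dist(ca, cb) >= ra + rb

# A sphere configuration serving as a semantic model for the premises
# "All Greeks are humans" and "All humans are mortals":
greeks  = ((0.0, 0.0), 1.0)
humans  = ((0.5, 0.0), 2.0)
mortals = ((0.0, 0.0), 4.0)

assert contains(humans, greeks)   # premise 1 holds in the model
assert contains(mortals, humans)  # premise 2 holds in the model
assert contains(mortals, greeks)  # conclusion "All Greeks are mortals" holds
```

In this reading, finding a sphere configuration in which all premises hold is what makes the conclusion checkable with symbolic rigour: the geometry of the model, rather than a statistical score, certifies the inference.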
Published
2025-04-11
How to Cite
Dong, T., Jamnik, M., & Liò, P. (2025). Neural Reasoning for Sure Through Constructing Explainable Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(11), 11598-11606. https://doi.org/10.1609/aaai.v39i11.33262
Section
AAAI Technical Track on Data Mining & Knowledge Management I