Neurosymbolic Reasoning and Learning with Restricted Boltzmann Machines


  • Son N. Tran, The University of Tasmania
  • Artur d'Avila Garcez, City, University of London



Keywords: KRR: Other Foundations of Knowledge Representation & Reasoning, KRR: Applications, ML: Deep Neural Architectures, ML: Relational Learning, ML: Transparent, Interpretable, Explainable ML


Abstract

Knowledge representation and reasoning in neural networks has been a long-standing endeavour that has attracted much attention recently. The principled integration of reasoning and learning in neural networks is a main objective of the area of neurosymbolic Artificial Intelligence. In this paper, a neurosymbolic system is introduced that can represent any propositional logic formula. A proof of equivalence is presented showing that energy minimization in restricted Boltzmann machines corresponds to logical reasoning. We demonstrate the application of our approach empirically on logical reasoning and on learning from data and knowledge. Experimental results show that reasoning can be performed effectively for a class of logical formulae. Learning from data and knowledge is also evaluated in comparison with the learning of logic programs using neural networks. The results show that our approach can improve on state-of-the-art neurosymbolic systems. The theorems and empirical results presented in this paper are expected to reignite research on the use of neural networks as massively parallel models for logical reasoning, and to promote the principled integration of reasoning and learning in deep networks.
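To give a flavour of the idea that energy minimization in an RBM can correspond to logical reasoning, the toy sketch below hand-picks weights for a two-visible, one-hidden RBM so that the satisfying assignment of the conjunction x AND y attains the lowest free energy. This is an illustrative construction under standard RBM definitions, not the paper's actual encoding or proof; the weight values (c = 8.0, hidden bias -1.5c) are hypothetical choices made only so the hidden pre-activation exceeds zero exactly when both variables are true.

```python
import itertools
import math

def softplus(x):
    """softplus(x) = log(1 + exp(x)), used in the RBM free energy."""
    return math.log1p(math.exp(x))

def free_energy(v, W, b_vis, b_hid):
    """Standard RBM free energy with binary hidden units:
    F(v) = -sum_i b_vis[i]*v[i] - sum_j softplus(b_hid[j] + sum_i W[i][j]*v[i])."""
    pre = [b_hid[j] + sum(W[i][j] * v[i] for i in range(len(v)))
           for j in range(len(b_hid))]
    return (-sum(b * x for b, x in zip(b_vis, v))
            - sum(softplus(a) for a in pre))

# Hypothetical encoding of x AND y: one hidden unit with weight c on each
# visible unit and bias -1.5*c, so only v = (1, 1) gives a positive
# pre-activation and a large softplus term (i.e., a much lower energy).
c = 8.0
W = [[c], [c]]          # 2 visible units, 1 hidden unit
b_vis = [0.0, 0.0]
b_hid = [-1.5 * c]

energies = {v: free_energy(v, W, b_vis, b_hid)
            for v in itertools.product([0, 1], repeat=2)}
best = min(energies, key=energies.get)
print(best)  # -> (1, 1): the satisfying assignment minimizes free energy
```

Enumerating all four truth assignments and picking the free-energy minimum recovers the satisfying assignment; in this reading, inference is a search over low-energy states rather than symbolic rule application.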




How to Cite

Tran, S. N., & Garcez, A. d’Avila. (2023). Neurosymbolic Reasoning and Learning with Restricted Boltzmann Machines. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 6558-6565.



AAAI Technical Track on Knowledge Representation and Reasoning