Modeling Patterns for Neural-Symbolic Reasoning Using Energy-based Models

Authors

  • Charles Dickens, University of California, Santa Cruz
  • Connor Pryor, University of California, Santa Cruz
  • Lise Getoor, University of California, Santa Cruz

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31187

Keywords:

Neural-Symbolic AI, Energy-Based Models, Large Language Models, Machine Learning

Abstract

Neural-symbolic (NeSy) AI strives to empower machine learning and large language models with fast, reliable predictions that exhibit commonsense and trustworthy reasoning by seamlessly integrating neural and symbolic methods. With such a broad scope, several taxonomies have been proposed to categorize this integration, emphasizing knowledge representation, reasoning algorithms, and applications. We introduce a knowledge representation-agnostic taxonomy focusing on the neural-symbolic interface, capturing methods that reason with probability, logic, and arithmetic constraints. Moreover, we formalize reasoning and learning in this setting and derive expressions for the gradients of a prominent class of learning losses. Through a rigorous empirical analysis spanning three tasks, we show that NeSy approaches achieve up to a 37% improvement over neural baselines in a semi-supervised setting and a 19% improvement over GPT-4 on question answering.

Published

2024-05-20

Section

Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge