What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning
DOI:
https://doi.org/10.1609/aaai.v38i13.29409
Keywords:
ML: Neuro-Symbolic Learning, PEAI: Philosophical Foundations of AI, KRR: Reasoning with Beliefs, ML: Transparent, Interpretable, Explainable ML, KRR: Nonmonotonic Reasoning
Abstract
This paper is a contribution to neural network semantics, a foundational framework for neuro-symbolic AI. The key insight of this theory is that logical operators can be mapped to operators on neural network states. In this paper, we do this for a neural network learning operator. We map a dynamic operator [φ] to iterated Hebbian learning, a simple learning policy that updates a neural network by repeatedly applying Hebb's learning rule until the net reaches a fixed point. Our main result is that we can "translate away" [φ]-formulas via reduction axioms. This means that completeness for the logic of iterated Hebbian learning follows from completeness of the base logic. These reduction axioms also provide (1) a human-interpretable description of iterated Hebbian learning as a kind of plausibility upgrade, and (2) an approach to building neural networks with guarantees on what they can learn.
Downloads
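The learning policy named in the abstract can be sketched in code: apply Hebb's rule ("neurons that fire together wire together") repeatedly until the weights stop changing. This is an illustrative toy only; the NumPy weight-matrix representation, the learning rate `eta`, and the weight cap (which guarantees a fixed point exists) are assumptions for the sketch, not the paper's formal model.

```python
import numpy as np

def hebb_step(W, x, eta=0.1, cap=1.0):
    """One application of Hebb's rule: strengthen the connection
    between each pair of co-active neurons, clipped at a cap so
    that repeated application can reach a fixed point."""
    W_new = np.clip(W + eta * np.outer(x, x), 0.0, cap)
    np.fill_diagonal(W_new, 0.0)  # no self-connections
    return W_new

def iterated_hebb(W, x, eta=0.1, cap=1.0, tol=1e-9, max_iter=10_000):
    """Iterated Hebbian learning: apply Hebb's rule until the
    weight matrix stops changing (a fixed point)."""
    for _ in range(max_iter):
        W_next = hebb_step(W, x, eta, cap)
        if np.max(np.abs(W_next - W)) < tol:
            return W_next
        W = W_next
    return W
```

On this toy model, connections between co-active neurons saturate at the cap while all other weights are untouched, so iteration always terminates at a fixed point; the paper's reduction axioms characterize the result of this kind of saturating update logically, as a plausibility upgrade.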
Published
2024-03-24
How to Cite
Schultz Kisby, C., Blanco, S. A., & Moss, L. S. (2024). What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14894-14901. https://doi.org/10.1609/aaai.v38i13.29409
Section
AAAI Technical Track on Machine Learning IV