Understanding and Improving Optimization in Predictive Coding Networks

Authors

  • Nicholas Alonso, Department of Cognitive Science, University of California, Irvine
  • Jeffrey Krichmar, Department of Cognitive Science and Department of Computer Science, University of California, Irvine
  • Emre Neftci, Electrical Engineering and Information Technology, RWTH Aachen, Germany; Peter Grünberg Institute, Forschungszentrum Jülich, Germany

DOI:

https://doi.org/10.1609/aaai.v38i10.28954

Keywords:

ML: Bio-inspired Learning, ML: Deep Learning Algorithms

Abstract

Backpropagation (BP), the standard learning algorithm for artificial neural networks, is often considered biologically implausible. In contrast, the standard learning algorithm for predictive coding (PC) models in neuroscience, known as the inference learning algorithm (IL), is a promising, bio-plausible alternative. However, several challenges and questions hinder IL's application to real-world problems. For example, IL is computationally demanding, and without memory-intensive optimizers like Adam, IL may converge to poor local minima. Moreover, although IL can reduce loss more quickly than BP, the reasons for these speedups and their robustness remain unclear. In this paper, we tackle these challenges by 1) altering the standard implementation of PC circuits to substantially reduce computation, 2) developing a novel optimizer that improves the convergence of IL without increasing memory usage, and 3) establishing theoretical results that help elucidate the conditions under which IL is sensitive to second- and higher-order information.
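For readers unfamiliar with IL, the following is a minimal sketch of the standard inference learning loop for a toy two-layer predictive coding network: activities are first relaxed to reduce local prediction errors (the inference phase), and weights are then updated from the equilibrated errors with local, Hebbian-like rules. The layer sizes, step sizes, and linear predictors here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy network: 4 inputs -> 8 hidden units -> 3 outputs (sizes are arbitrary).
W = [rng.normal(0, 0.1, (8, 4)), rng.normal(0, 0.1, (3, 8))]

def infer_and_learn(x, y, W, n_infer=20, lr_x=0.1, lr_w=0.01):
    W0, W1 = W
    a1 = W0 @ x                            # initialize hidden activity feedforward
    for _ in range(n_infer):
        e1 = a1 - W0 @ x                   # prediction error at the hidden layer
        e2 = y - W1 @ a1                   # prediction error at the clamped output
        a1 += lr_x * (-e1 + W1.T @ e2)     # inference step: reduce local errors
    e1 = a1 - W0 @ x                       # errors at (approximate) equilibrium
    e2 = y - W1 @ a1
    W0 += lr_w * np.outer(e1, x)           # local weight updates from errors
    W1 += lr_w * np.outer(e2, a1)
    return float(e1 @ e1 + e2 @ e2)        # energy: summed squared errors

x = rng.normal(size=4)
y = rng.normal(size=3)
energy = infer_and_learn(x, y, W)
```

Note that, unlike BP, no global backward pass is needed: each weight update depends only on the pre-synaptic activity and the post-synaptic error, which is what makes IL attractive as a bio-plausible learning rule. The inference phase is also why IL is computationally demanding, as the paper's modified PC circuits aim to address.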

Published

2024-03-24

How to Cite

Alonso, N., Krichmar, J., & Neftci, E. (2024). Understanding and Improving Optimization in Predictive Coding Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10812-10820. https://doi.org/10.1609/aaai.v38i10.28954

Section

AAAI Technical Track on Machine Learning I