Backprop-Free Reinforcement Learning with Active Neural Generative Coding

Authors

  • Alexander G. Ororbia, Rochester Institute of Technology
  • Ankur Mali, The Pennsylvania State University

DOI

https://doi.org/10.1609/aaai.v36i1.19876

Keywords

Cognitive Modeling & Cognitive Systems (CMS), Machine Learning (ML)

Abstract

In humans, perceptual awareness facilitates the fast recognition and extraction of information from sensory input. This awareness largely depends on how the human agent interacts with the environment. In this work, we propose active neural generative coding, a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments. Specifically, we develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference. We demonstrate on several simple control problems that our framework performs competitively with deep Q-learning. The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
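The abstract describes learning a generative model from purely local prediction errors rather than backpropagated gradients. As a rough illustration of that idea (not the authors' ANGC agent; the network shape, activation, and all hyperparameters below are placeholder assumptions), here is a minimal predictive-coding sketch: a latent layer is iteratively settled to explain an observation, and the weights are then adjusted with a local two-factor Hebbian-style rule using only the layer's own error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer generative model: latent layer z1 predicts observation x.
# Sizes are arbitrary placeholders for illustration.
n0, n1 = 8, 4
W = rng.normal(0.0, 0.1, size=(n0, n1))  # generative weights (z1 -> x)

def phi(v):
    return np.tanh(v)

def settle_and_learn(x, W, K=20, beta=0.1, eta=0.05):
    """One predictive-coding step: settle the latent state to explain x,
    then apply a local weight update. No error is backpropagated through
    a computation graph -- each update uses only locally available terms."""
    z1 = np.zeros(n1)
    for _ in range(K):
        e0 = x - W @ phi(z1)                       # local prediction error
        # State correction: project the error upward through W^T,
        # gated by the local activation derivative (still a local rule).
        z1 = z1 + beta * (1.0 - phi(z1) ** 2) * (W.T @ e0)
    e0 = x - W @ phi(z1)
    W = W + eta * np.outer(e0, phi(z1))            # two-factor Hebbian-style update
    return W, float(np.mean(e0 ** 2))

# Repeatedly fitting a fixed observation should drive the local error down.
x = rng.normal(size=n0)
errs = []
for _ in range(50):
    W, mse = settle_and_learn(x, W)
    errs.append(mse)
```

The settling loop and the outer-product weight update together stand in for the gradient computation that backprop would otherwise perform; the full framework in the paper couples such updates to action selection and reward, which this sketch omits.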

Published

2022-06-28

How to Cite

Ororbia, A. G., & Mali, A. (2022). Backprop-Free Reinforcement Learning with Active Neural Generative Coding. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 29-37. https://doi.org/10.1609/aaai.v36i1.19876

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems