Deep Reinforcement Learning with Time-Scale Invariant Memory

Authors

  • Md Rysul Kabir, Department of Computer Science, Indiana University Bloomington
  • James Mochizuki-Freeman, Department of Computer Science, Indiana University Bloomington
  • Zoran Tiganj, Department of Computer Science, Indiana University Bloomington

DOI:

https://doi.org/10.1609/aaai.v39i2.32124

Abstract

The ability to estimate temporal relationships is critical for both animals and artificial agents. Cognitive science and neuroscience provide remarkable insights into behavioral and neural aspects of temporal credit assignment. In particular, scale invariance of learning dynamics, observed in behavior and supported by neural data, is one of the key principles that govern animal perception: proportional rescaling of temporal relationships does not alter the overall learning efficiency. Here we integrate a computational neuroscience model of scale-invariant memory into deep reinforcement learning (RL) agents. We first provide a theoretical analysis and then demonstrate through experiments that such agents can learn robustly across a wide range of temporal scales, unlike agents built with commonly used recurrent memory architectures such as LSTM. This result illustrates that incorporating computational principles from neuroscience and cognitive science into deep neural networks can enhance adaptability to complex temporal dynamics, mirroring some of the core properties of human learning.
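The abstract's key ingredient, a scale-invariant memory, can be illustrated with a minimal sketch (not the paper's actual model, whose details are in the full text): a bank of leaky integrators with geometrically spaced decay rates, a common construction in computational neuroscience models of temporal memory. The names and parameters below are illustrative assumptions. Because the decay rates are log-spaced, rescaling all delays by a constant factor merely shifts the activity pattern along the unit axis rather than distorting it — the property that lets downstream learning transfer across time scales.

```python
import numpy as np

# Octave-spaced decay rates (illustrative choice): the log spacing is
# what makes the memory scale invariant -- doubling every delay shifts
# the activity pattern by exactly one slot along this axis.
s = 2.0 ** np.arange(-5, 3)  # decay rates from 1/32 to 4

def memory_after_delay(T, s, dt=0.005):
    """Bank of leaky integrators dF/dt = -s*F driven by an impulse at
    t = 0, integrated with forward Euler. Each unit approximates
    exp(-s*T), i.e., the Laplace transform of the input history
    evaluated at its decay rate."""
    F = np.ones_like(s)      # impulse at t = 0
    for _ in range(int(T / dt)):
        F = F - dt * s * F   # exponential decay at rate s
    return F

F_T = memory_after_delay(2.0, s)   # memory after delay T
F_2T = memory_after_delay(4.0, s)  # memory after delay 2T

# Doubling the delay ~ shifting the pattern one slot along the s-axis:
# the unit with rate s_i after delay 2T matches the unit with rate
# 2*s_i = s_{i+1} after delay T.
```

An LSTM's gates have no such built-in equivariance, which is consistent with the abstract's observation that LSTM-based agents degrade when temporal relationships are proportionally rescaled.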

Published

2025-04-11

How to Cite

Kabir, M. R., Mochizuki-Freeman, J., & Tiganj, Z. (2025). Deep Reinforcement Learning with Time-Scale Invariant Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 1345–1354. https://doi.org/10.1609/aaai.v39i2.32124

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems