Discerning Temporal Difference Learning

Authors

  • Jianfei Ma, Northwestern Polytechnical University

DOI:

https://doi.org/10.1609/aaai.v38i13.29335

Keywords:

ML: Reinforcement Learning

Abstract

Temporal difference learning (TD) is a foundational concept in reinforcement learning (RL), aimed at efficiently assessing a policy's value function. TD(λ), a potent variant, incorporates a memory trace to distribute the prediction error into the historical context. However, this approach often neglects the significance of historical states and the relative importance of propagating the TD error, influenced by challenges such as visitation imbalance or outcome noise. To address this, we propose a novel TD algorithm named discerning TD learning (DTD), which allows flexible emphasis functions—predetermined or adapted during training—to allocate effort effectively across states. We establish the convergence properties of our method within a specific class of emphasis functions and showcase its promising potential for adaptation to deep RL contexts. Empirical results underscore that employing a judicious emphasis function not only improves value estimation but also expedites learning across diverse scenarios.
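
To make the mechanism described in the abstract concrete, the sketch below shows tabular TD(λ), where an eligibility trace spreads each TD error back over recently visited states, with the trace increment for each state scaled by a user-supplied emphasis function. This is only an illustration of the general idea; the exact DTD update rule, the admissible class of emphasis functions, and the convergence conditions are specified in the paper itself. The callables env_step, policy, and emphasis are hypothetical placeholders, not part of the paper's API.

```python
import numpy as np

def td_lambda_with_emphasis(env_step, policy, emphasis, num_states,
                            gamma=0.99, lam=0.9, alpha=0.1, episodes=100):
    """Tabular TD(lambda) with an emphasis-weighted eligibility trace.

    Illustrative sketch only (not the paper's exact DTD algorithm).
    env_step(s, a) -> (s_next, reward, done), policy(s) -> a, and
    emphasis(s) -> nonnegative weight are assumed, user-supplied callables.
    """
    V = np.zeros(num_states)              # value estimates
    for _ in range(episodes):
        e = np.zeros(num_states)          # eligibility traces
        s, done = 0, False                # assume episodes start in state 0
        while not done:
            a = policy(s)
            s_next, r, done = env_step(s, a)
            # TD error; bootstrap only if the episode has not terminated
            delta = r + gamma * V[s_next] * (not done) - V[s]
            e *= gamma * lam              # decay all traces
            e[s] += emphasis(s)           # emphasis-weighted trace increment
            V += alpha * delta * e        # propagate the error along the trace
            s = s_next
    return V
```

Setting emphasis to a constant 1 recovers ordinary accumulating-trace TD(λ); a non-uniform emphasis function shifts learning effort toward the states it weights most, which is the behavior the abstract attributes to DTD.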

Published

2024-03-24

How to Cite

Ma, J. (2024). Discerning Temporal Difference Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14238-14245. https://doi.org/10.1609/aaai.v38i13.29335

Section

AAAI Technical Track on Machine Learning IV