Learning Expected Emphatic Traces for Deep RL

Authors

  • Ray Jiang, DeepMind
  • Shangtong Zhang, University of Oxford
  • Veronica Chelu, Mila / McGill University
  • Adam White, DeepMind
  • Hado van Hasselt, DeepMind

DOI:

https://doi.org/10.1609/aaai.v36i6.20660

Keywords:

Machine Learning (ML)

Abstract

Off-policy sampling and experience replay are key to improving sample efficiency and scaling model-free temporal-difference learning methods. Combined with function approximation, such as neural networks, they form the so-called deadly triad, which is potentially unstable. Recently, it has been shown that stability and good performance at scale can be achieved by combining emphatic weightings and multi-step updates. This approach, however, generally requires sampling complete trajectories in sequence in order to compute the required emphatic weighting. In this paper we investigate how to combine emphatic weightings with non-sequential, offline data sampled from a replay buffer. We develop a multi-step emphatic weighting that can be combined with replay, and a time-reversed n-step TD learning algorithm to learn the required emphatic weighting. We show that these state weightings reduce variance compared with prior approaches, while providing convergence guarantees. We tested the approach at scale on Atari 2600 video games, and observed that the new X-ETD(n) agent improved over baseline agents, highlighting both the scalability and broad applicability of our approach.
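
The abstract does not reproduce the algorithm's equations, so the following is only a minimal, tabular, one-step (n = 1) sketch of the general idea it describes: an expected emphatic trace m(s) is learned with a time-reversed TD update (its target looks back to the predecessor state), and is then used to weight ordinary off-policy TD value updates on transitions drawn from a replay buffer in arbitrary order. All names (m, v, interest, rho_prev), the synthetic data, and the one-step simplification are illustrative assumptions, not the authors' X-ETD(n) implementation.

```python
# Illustrative sketch only: tabular, one-step analogue of learning an expected
# emphatic (follow-on) trace and using it to weight replayed TD updates.
import random
import numpy as np

NUM_STATES = 10
ALPHA_V = 0.1   # step size for the value function
ALPHA_M = 0.1   # step size for the expected emphatic trace
GAMMA = 0.9

v = np.zeros(NUM_STATES)        # state values under the target policy
m = np.ones(NUM_STATES)         # expected emphatic trace, one entry per state
interest = np.ones(NUM_STATES)  # interest i(s); uniform here for simplicity

def update(transition):
    """One replayed, non-sequential update.

    transition = (s_prev, rho_prev, s, rho, r, s_next, done), where rho_prev
    and rho are the importance-sampling ratios pi/mu of the predecessor and
    current actions, recorded when the data was collected.
    """
    s_prev, rho_prev, s, rho, r, s_next, done = transition

    # Time-reversed TD update for the expected emphatic trace: the target
    # looks backwards to the predecessor state instead of forwards.
    m_target = interest[s] + GAMMA * rho_prev * m[s_prev]
    m[s] += ALPHA_M * (m_target - m[s])

    # Ordinary one-step off-policy TD error for the value function.
    v_next = 0.0 if done else v[s_next]
    delta = r + GAMMA * v_next - v[s]

    # Emphatic weighting: scale the value update by the learned expected trace.
    v[s] += ALPHA_V * m[s] * rho * delta

# A few synthetic chain transitions, purely so the sketch runs end to end.
rng = np.random.default_rng(0)
replay_buffer = []
s_prev, rho_prev = 0, 1.0
for s in range(1, NUM_STATES - 1):
    rho = float(rng.uniform(0.5, 1.5))   # stand-in importance-sampling ratio
    r = float(rng.normal())              # stand-in reward
    s_next = s + 1
    replay_buffer.append((s_prev, rho_prev, s, rho, r, s_next,
                          s_next == NUM_STATES - 1))
    s_prev, rho_prev = s, rho

# Transitions are replayed in arbitrary order: the emphatic weighting is read
# from m(s) rather than accumulated along a sequential trajectory.
for _ in range(10_000):
    update(random.choice(replay_buffer))
```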

Published

2022-06-28

How to Cite

Jiang, R., Zhang, S., Chelu, V., White, A., & van Hasselt, H. (2022). Learning Expected Emphatic Traces for Deep RL. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 7015-7023. https://doi.org/10.1609/aaai.v36i6.20660

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I