Expected Eligibility Traces

Authors

  • Hado van Hasselt, DeepMind
  • Sephora Madjiheurem, University College London
  • Matteo Hessel, DeepMind
  • David Silver, DeepMind
  • André Barreto, DeepMind
  • Diana Borsa, DeepMind

Keywords:

Reinforcement Learning

Abstract

The question of how to determine which states and actions are responsible for a certain outcome is known as the credit assignment problem and remains a central research question in reinforcement learning and artificial intelligence. Eligibility traces enable efficient credit assignment to the recent sequence of states and actions experienced by the agent, but not to counterfactual sequences that could also have led to the current state. In this work, we introduce expected eligibility traces. Expected traces allow, with a single update, credit to be assigned to states and actions that could have preceded the current state, even if they did not do so on this occasion. We discuss when expected traces provide benefits over classic (instantaneous) traces in temporal-difference learning, and show that sometimes substantial improvements can be attained. We provide a way to smoothly interpolate between instantaneous and expected traces by a mechanism similar to bootstrapping, which ensures that the resulting algorithm is a strict generalisation of TD(λ). Finally, we discuss possible extensions and connections to related ideas, such as successor features.
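To make the contrast in the abstract concrete, the following is a minimal tabular sketch of TD(λ) with classic (instantaneous) accumulating traces alongside an expected-trace variant, in which a per-state expected trace is learned toward the instantaneous trace and then used in the value update. The environment, step sizes, and all variable names (`z_bar`, `beta`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

n_states = 5
gamma, lam, alpha, beta = 0.9, 0.8, 0.1, 0.1

v = np.zeros(n_states)                   # value estimates
z = np.zeros(n_states)                   # instantaneous eligibility trace
z_bar = np.zeros((n_states, n_states))   # z_bar[s] ≈ E[z | S_t = s] (assumed tabular form)

def td_lambda_step(s, r, s_next, done):
    """Classic TD(λ): credit flows only along the trajectory actually experienced."""
    global z
    delta = r + (0.0 if done else gamma * v[s_next]) - v[s]
    z *= gamma * lam
    z[s] += 1.0                          # accumulating trace
    v[:] += alpha * delta * z

def expected_trace_step(s, r, s_next, done):
    """Expected-trace variant: learn z_bar[s] toward the instantaneous trace,
    then update values with z_bar[s], so states that could have preceded s
    (but did not on this occasion) also receive credit."""
    global z
    delta = r + (0.0 if done else gamma * v[s_next]) - v[s]
    z *= gamma * lam
    z[s] += 1.0
    z_bar[s] += beta * (z - z_bar[s])    # track the expected trace for state s
    v[:] += alpha * delta * z_bar[s]
```

On a simple chain, both updates propagate a terminal reward back to earlier states in a single episode; the expected-trace variant additionally accumulates, in `z_bar`, which predecessors typically lead to each state, so later updates can credit them even when they are not on the current trajectory.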


Published

2021-05-18

How to Cite

van Hasselt, H., Madjiheurem, S., Hessel, M., Silver, D., Barreto, A., & Borsa, D. (2021). Expected Eligibility Traces. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9997-10005. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17200

Section

AAAI Technical Track on Machine Learning IV