Simultaneously Updating All Persistence Values in Reinforcement Learning

Authors

  • Luca Sabbioni, Politecnico di Milano
  • Luca Al Daire, Politecnico di Milano
  • Lorenzo Bisi, ML cube
  • Alberto Maria Metelli, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v37i8.26156

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

In Reinforcement Learning, the performance of learning agents is highly sensitive to the choice of time discretization. Agents acting at high frequencies have the finest control opportunities, but they also face drawbacks, such as possibly inefficient exploration and vanishing action advantages. The repetition of actions, i.e., action persistence, helps in this regard, as it allows the agent to visit wider regions of the state space and to improve the estimation of the action effects. In this work, we derive a novel operator, the All-Persistence Bellman Operator, which enables an effective use of both low-persistence experience, through decomposition into sub-transitions, and high-persistence experience, thanks to the introduction of a suitable bootstrap procedure. In this way, transitions collected at any time scale can be employed to simultaneously update the action values for the whole considered persistence set. We prove the contraction property of the All-Persistence Bellman Operator and, based on it, we extend classic Q-learning and DQN. After providing a study on the effects of persistence, we experimentally evaluate our approach in both tabular contexts and more challenging frameworks, including some Atari games.
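
To make the mechanism described above concrete, the following is a minimal tabular sketch of an "all-persistence" style update, assuming a persistence set {1, ..., K_max} with one Q-table per persistence value. It is an illustration of the idea, not the authors' implementation; all names (e.g., update_all_persistences, q_tables) are hypothetical.

```python
import numpy as np

def update_all_persistences(q_tables, states, action, rewards, gamma, alpha):
    """Hypothetical sketch: update every persistence value from one transition.

    q_tables[k][s, a]: estimated value of taking action a in state s and
                       persisting it for k primitive steps (keys 1..K_max).
    states:  [s_0, s_1, ..., s_k] visited while persisting `action` for k steps.
    rewards: [r_1, ..., r_k] primitive rewards collected along the way.
    """
    k = len(rewards)
    persistences = sorted(q_tables.keys())
    s0 = states[0]

    for j in persistences:
        if j <= k:
            # Low-persistence case: the first j primitive steps form a genuine
            # j-persistent sub-transition; bootstrap greedily over all
            # (action, persistence) pairs at state s_j.
            g = sum(gamma ** i * rewards[i] for i in range(j))
            boot = max(q_tables[p][states[j]].max() for p in persistences)
            target = g + gamma ** j * boot
        else:
            # High-persistence case: only k of the j steps were observed, so
            # bootstrap with the value of continuing the *same* action for the
            # remaining j - k steps.
            g = sum(gamma ** i * rewards[i] for i in range(k))
            boot = q_tables[j - k][states[k]][action]
            target = g + gamma ** k * boot
        q_tables[j][s0][action] += alpha * (target - q_tables[j][s0][action])

# Example usage (hypothetical 5-state, 2-action problem, persistences 1..3):
# q_tables = {k: np.zeros((5, 2)) for k in range(1, 4)}
# update_all_persistences(q_tables, states=[0, 2, 4], action=1,
#                         rewards=[0.0, 1.0], gamma=0.99, alpha=0.1)
```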

Published

2023-06-26

How to Cite

Sabbioni, L., Al Daire, L., Bisi, L., Metelli, A. M., & Restelli, M. (2023). Simultaneously Updating All Persistence Values in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9668-9676. https://doi.org/10.1609/aaai.v37i8.26156

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III