Metrics and Continuity in Reinforcement Learning

Authors

  • Charline Le Lan, University of Oxford
  • Marc G. Bellemare, Google Research, Brain Team
  • Pablo Samuel Castro, Google Research, Brain Team

Keywords

Reinforcement Learning, Other Foundations of Planning, Routing & Scheduling, Representation Learning

Abstract

In most practical applications of reinforcement learning, it is untenable to maintain direct estimates for individual states; in continuous-state systems, it is impossible. Instead, researchers often leverage state similarity (whether explicitly or implicitly) to build models that generalize well from a limited set of samples. The notion of state similarity used, and the neighbourhoods and topologies it induces, is thus of crucial importance, as it directly affects the performance of the algorithms. Indeed, a number of recent works introduce algorithms that assume the existence of "well-behaved" neighbourhoods, but leave the full specification of such topologies for future work. In this paper we introduce a unified formalism for defining these topologies through the lens of metrics. We establish a hierarchy amongst these metrics and demonstrate their theoretical implications for the Markov Decision Process specifying the reinforcement learning problem. We complement our theoretical results with empirical evaluations showcasing the differences between the metrics considered.
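To make the idea of a state-similarity metric concrete, the sketch below computes one well-known instance from the RL literature, a bisimulation-style pseudometric, on a tiny hand-made deterministic MDP. This is an illustrative example only: the MDP, its rewards, and its transitions are hypothetical, and this single metric does not represent the full hierarchy of metrics studied in the paper. For deterministic transitions the metric satisfies d(s, t) = max_a ( |R(s,a) - R(t,a)| + γ·d(s', t') ), which can be solved by fixed-point iteration.

```python
import numpy as np

# Hypothetical deterministic MDP: 3 states, 2 actions.
# R[s, a] is the immediate reward; T[s, a] is the successor state.
R = np.array([[0.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
T = np.array([[1, 2],
              [1, 2],
              [0, 0]])
gamma = 0.9

# Fixed-point iteration for a bisimulation-style pseudometric:
#   d(s, t) = max_a ( |R[s,a] - R[t,a]| + gamma * d(T[s,a], T[t,a]) )
n = R.shape[0]
d = np.zeros((n, n))
for _ in range(1000):
    d_new = np.max(
        np.abs(R[:, None, :] - R[None, :, :])          # reward difference per action
        + gamma * d[T[:, None, :], T[None, :, :]],     # discounted successor distance
        axis=2,
    )
    if np.max(np.abs(d_new - d)) < 1e-10:
        d = d_new
        break
    d = d_new

# States 0 and 1 have identical rewards and successors, so d(0, 1) = 0:
# the metric identifies them as behaviourally equivalent, while state 2
# (whose rewards are swapped across actions) is kept far from both.
print(np.round(d, 3))
```

States deemed close under such a metric can safely share value estimates, which is exactly the kind of generalization the neighbourhood structures in the abstract are meant to license.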

Published

2021-05-18

How to Cite

Le Lan, C., Bellemare, M. G., & Castro, P. S. (2021). Metrics and Continuity in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8261-8269. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17005

Section

AAAI Technical Track on Machine Learning II