Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems

Authors

  • Carlo D'Eramo, Politecnico di Milano
  • Alessandro Nuara, Politecnico di Milano
  • Matteo Pirotta, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v31i1.10887

Keywords:

continuous reinforcement learning, maximum expected value, reinforcement learning

Abstract

This paper is about the estimation of the maximum expected value of an infinite set of random variables. This estimation problem is relevant in many fields, such as Reinforcement Learning (RL). In RL, it is well known that, in some stochastic environments, a bias in the estimation error can grow step by step, increasing the approximation error and leading to large overestimates of the true action values. Recently, some approaches have been proposed to reduce this bias and obtain better action-value estimates, but they are limited to finite problems. In this paper, we leverage the recently proposed weighted estimator and Gaussian process regression to derive a new method that natively handles infinitely many random variables. We show how these techniques can be used to address RL problems with both continuous states and continuous actions. To evaluate the effectiveness of the proposed approach, we perform empirical comparisons with related approaches.
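
As a rough illustration of the idea, the sketch below combines Gaussian process regression with a Monte Carlo approximation of the weighted estimator: posterior function samples determine how often each candidate action attains the maximum, and those frequencies weight the posterior means. This is a minimal sketch under assumed details (RBF kernel, grid discretization of the action space, Monte Carlo weights), not the paper's exact algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Noisy observations of an unknown action-value function on [0, 1].
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.sin(6.0 * X.ravel()) + 0.3 * rng.standard_normal(30)

# Fit a GP posterior over the value function (RBF kernel is an assumption).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.3**2)
gp.fit(X, y)

# Discretize the action space and draw posterior function samples.
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mu = gp.predict(grid)                               # posterior means
samples = gp.sample_y(grid, n_samples=2000, random_state=0)

# Weight w_i = estimated probability that action i is the maximizer,
# approximated by how often it attains the max across posterior samples.
counts = np.bincount(samples.argmax(axis=0), minlength=len(grid))
w = counts / samples.shape[1]

# Weighted estimate of the maximum expected value: sum_i w_i * mu_i.
weighted_max = float(w @ mu)

# For comparison, the plain maximum of the posterior means, which
# typically overestimates the true maximum in noisy settings.
print(weighted_max, float(mu.max()))
```

Compared with taking the maximum of the estimated values directly, spreading the estimate across all candidates in proportion to their probability of being the maximizer reduces the positive bias that the abstract describes.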

Published

2017-02-13

How to Cite

D’Eramo, C., Nuara, A., Pirotta, M., & Restelli, M. (2017). Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10887