Safe Policy Improvement with Baseline Bootstrapping in Factored Environments

Authors

  • Thiago D. Simão, Delft University of Technology
  • Matthijs T. J. Spaan, Delft University of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33014967

Abstract

We present a novel safe reinforcement learning algorithm that exploits the factored dynamics of the environment to become less conservative. We focus on problem settings in which a policy is already running and the interaction with the environment is limited. In order to safely deploy an updated policy, it is necessary to provide a confidence level regarding its expected performance. However, algorithms for safe policy improvement might require a large number of past experiences to become confident enough to change the agent’s behavior. Factored reinforcement learning, on the other hand, is known to make good use of the data provided. It can achieve a better sample complexity by exploiting independence between features of the environment, but it lacks a confidence level. We study how to improve the sample efficiency of the safe policy improvement with baseline bootstrapping algorithm by exploiting the factored structure of the environment. Our main result is a theoretical bound that is linear in the number of parameters of the factored representation instead of the number of states. The empirical analysis shows that our method can improve the policy using potentially an order of magnitude fewer samples than the flat algorithm.
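
The abstract summarizes the approach at a high level. As a minimal sketch of the bootstrapping idea underlying safe policy improvement with baseline bootstrapping (SPIBB) in the flat setting, the Python snippet below keeps the baseline policy's probabilities for state-action pairs that were observed fewer than a threshold number of times and greedily reassigns the remaining probability mass among well-sampled actions. The function and parameter names (spibb_policy_step, n_wedge) are illustrative assumptions, not taken from the paper; in the factored variant described in the abstract, counts would effectively be accumulated per parameter of the factored representation rather than per flat state, which is what allows the bound to scale with the number of parameters.

    import numpy as np

    def spibb_policy_step(q, baseline, counts, n_wedge):
        """One greedy SPIBB-style improvement step (illustrative sketch).

        q        : (n_states, n_actions) estimated action values
        baseline : (n_states, n_actions) baseline policy probabilities
        counts   : (n_states, n_actions) state-action visit counts
        n_wedge  : minimum count required to deviate from the baseline
        """
        n_states, n_actions = q.shape
        new_policy = np.zeros_like(baseline)
        for s in range(n_states):
            bootstrapped = counts[s] < n_wedge
            # Where data is scarce, keep the baseline probabilities unchanged.
            new_policy[s, bootstrapped] = baseline[s, bootstrapped]
            free_mass = baseline[s, ~bootstrapped].sum()
            if free_mass > 0:
                # Reassign the remaining mass to the best well-sampled action.
                q_masked = np.where(bootstrapped, -np.inf, q[s])
                new_policy[s, np.argmax(q_masked)] += free_mass
        return new_policy

The update only departs from the baseline where enough evidence has been collected, which is how SPIBB-style methods keep a confidence guarantee on the improved policy.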

Published

2019-07-17

How to Cite

Simão, T. D., & Spaan, M. T. J. (2019). Safe Policy Improvement with Baseline Bootstrapping in Factored Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4967-4974. https://doi.org/10.1609/aaai.v33i01.33014967

Section

AAAI Technical Track: Machine Learning