Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces

Authors

  • Daniel Levy Stanford University
  • Stefano Ermon Stanford University, Woods Institute for the Environment

Keywords:

reinforcement learning, deep learning, discrete action spaces, sample efficiency, continuous relaxation

Abstract

Policy optimization methods have shown great promise in solving complex reinforcement and imitation learning tasks. While model-free methods are broadly applicable, they often require many samples to optimize complex policies. Model-based methods greatly improve sample efficiency but at the cost of poor generalization, requiring a carefully handcrafted model of the system dynamics for each task. Recently, hybrid methods have been successful in trading off applicability for improved sample complexity. However, these have been limited to continuous action spaces. In this work, we present a new hybrid method based on an approximation of the dynamics as an expectation over the next state under the current policy. This relaxation allows us to derive a novel hybrid policy gradient estimator, combining score function and pathwise derivative estimators, that is applicable to discrete action spaces. We show significant gains in sample complexity, ranging between 1.7 and 25 times, when learning parameterized policies on Cart Pole, Acrobot, Mountain Car, and Hand Mass. Our method is applicable to both discrete and continuous action spaces, whereas competing pathwise methods are limited to the latter.
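The core idea in the abstract — replacing a sampled discrete quantity with its expectation under the policy so that gradients can flow deterministically — can be illustrated on a one-step toy problem. The sketch below is not the paper's implementation; it simply contrasts a Monte Carlo score-function (REINFORCE) gradient estimator with the exact, zero-variance gradient of the relaxed objective, where the reward is averaged over actions under a softmax policy (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.2, -0.1, 0.4])    # policy logits (illustrative)
rewards = np.array([1.0, 0.0, 2.0])   # per-action rewards (illustrative)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def score_function_grad(theta, n_samples=200_000):
    """Monte Carlo REINFORCE estimate of d E[r(a)] / d theta."""
    pi = softmax(theta)
    actions = rng.choice(len(theta), size=n_samples, p=pi)
    # grad_theta log pi(a) = one_hot(a) - pi  for a softmax policy
    one_hot = np.eye(len(theta))[actions]
    grads = rewards[actions][:, None] * (one_hot - pi)
    return grads.mean(axis=0)

def relaxed_grad(theta):
    """Exact gradient of the relaxed objective sum_a pi(a) r(a)."""
    pi = softmax(theta)
    expected_r = pi @ rewards
    # d/d theta_k of sum_a pi_a r_a = pi_k * (r_k - E[r])
    return pi * (rewards - expected_r)

g_sf = score_function_grad(theta)   # noisy, sample-based
g_rx = relaxed_grad(theta)          # deterministic, zero variance
```

Both quantities estimate the same gradient, but the relaxed form is deterministic; applying the analogous relaxation to the system dynamics is what lets the paper's hybrid estimator propagate pathwise derivatives through discrete action spaces.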

Published

2018-04-29

How to Cite

Levy, D., & Ermon, S. (2018). Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11822