Expected Policy Gradients

Authors

  • Kamil Ciosek, University of Oxford
  • Shimon Whiteson, University of Oxford

DOI:

https://doi.org/10.1609/aaai.v32i1.11607

Keywords:

Reinforcement Learning, MDPs, Actor-Critic, Policy Gradients

Abstract

We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected SARSA, EPG integrates across actions when estimating the gradient, instead of relying only on the action sampled in the trajectory. We establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. We also prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and, for the Gaussian case, with no computational overhead. Finally, we show that it is optimal in a certain sense to explore with a Gaussian policy such that the covariance is proportional to the exponential of the scaled Hessian of the critic with respect to the actions. We present empirical results confirming that this new form of exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic in four challenging MuJoCo domains.
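The abstract's two technical claims lend themselves to a small illustration: integrating over the action distribution rather than relying on a single sampled action, and exploring with a Gaussian whose covariance is proportional to the matrix exponential of the scaled Hessian of the critic with respect to the actions. The sketch below is not the authors' implementation; it assumes a toy one-dimensional action space, a hypothetical quadratic critic (q_critic), and plain numerical quadrature in place of the analytic integrals derived in the paper.

    import numpy as np
    from scipy.linalg import expm

    def q_critic(s, a):
        # Toy quadratic critic standing in for a learned Q(s, a).
        return -(a - 0.5 * s) ** 2

    def grad_log_gaussian_mean(a, mu, sigma):
        # Derivative of log N(a | mu, sigma^2) with respect to the mean mu.
        return (a - mu) / sigma ** 2

    def spg_gradient(s, mu, sigma, rng):
        # Classic stochastic policy gradient: a single sampled action.
        a = rng.normal(mu, sigma)
        return grad_log_gaussian_mean(a, mu, sigma) * q_critic(s, a)

    def epg_gradient(s, mu, sigma, n_points=401):
        # EPG-style estimate: integrate over the action distribution
        # (simple quadrature here; the paper derives analytic forms).
        actions, step = np.linspace(mu - 6 * sigma, mu + 6 * sigma,
                                    n_points, retstep=True)
        density = (np.exp(-0.5 * ((actions - mu) / sigma) ** 2)
                   / (sigma * np.sqrt(2 * np.pi)))
        integrand = (density
                     * grad_log_gaussian_mean(actions, mu, sigma)
                     * q_critic(s, actions))
        return np.sum(integrand) * step

    def exploration_covariance(hessian, c=1.0, scale=1.0):
        # Covariance proportional to the matrix exponential of the scaled
        # Hessian of the critic with respect to the actions.
        return scale * expm(c * hessian)

    rng = np.random.default_rng(0)
    s, mu, sigma = 1.0, 0.0, 0.3
    spg_estimates = [spg_gradient(s, mu, sigma, rng) for _ in range(1000)]
    print("SPG mean of 1000 single-action samples:", np.mean(spg_estimates))
    print("SPG std of those samples:              ", np.std(spg_estimates))
    print("EPG quadrature over actions:           ", epg_gradient(s, mu, sigma))
    H = np.array([[-2.0]])  # Hessian of the toy critic w.r.t. the action.
    print("Exploration covariance:", exploration_covariance(H, c=0.5))

Running the sketch shows the single-sample estimates fluctuating around the same value that the quadrature-based estimate recovers deterministically, which is the variance-reduction effect the abstract describes; the last line shows the covariance construction Sigma proportional to exp(c * H) for a one-dimensional action.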

Published

2018-04-29

How to Cite

Ciosek, K., & Whiteson, S. (2018). Expected Policy Gradients. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11607