TY - JOUR
AU - Ciosek, Kamil
AU - Whiteson, Shimon
PY - 2018/04/29
Y2 - 2024/06/21
TI - Expected Policy Gradients
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 32
IS - 1
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v32i1.11607
UR - https://ojs.aaai.org/index.php/AAAI/article/view/11607
SP -
AB - We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates across the action when estimating the gradient, instead of relying only on the action in the sampled trajectory. We establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. We also prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and, for the Gaussian case, with no computational overhead. Finally, we show that it is optimal in a certain sense to explore with a Gaussian policy such that the covariance is proportional to the exponential of the scaled Hessian of the critic with respect to the actions. We present empirical results confirming that this new form of exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic in four challenging MuJoCo domains.
ER -