Policy Search by Target Distribution Learning for Continuous Control

Authors

  • Chuheng Zhang, Tsinghua University
  • Yuanqi Li, Tsinghua University
  • Jian Li, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i04.6156

Abstract

It is known that existing policy gradient methods (such as vanilla policy gradient, PPO, A2C) may suffer from overly large gradients when the current policy is close to deterministic, leading to an unstable training process. We show that such instability can happen even in a very simple environment. To address this issue, we propose a new method, called target distribution learning (TDL), for policy improvement in reinforcement learning. TDL alternates between proposing a target distribution and training the policy network to approach the target distribution. TDL is more effective in constraining the KL divergence between updated policies, and hence leads to more stable policy improvements over iterations. Our experiments show that TDL algorithms perform comparably to (or better than) state-of-the-art algorithms for most continuous control tasks in the MuJoCo environment while being more stable in training.
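The abstract only sketches the alternating procedure, so below is a minimal illustrative sketch in PyTorch of the general idea: propose a target distribution whose KL divergence from the current policy is bounded, then train the policy network to approach that target. The diagonal Gaussian policy, the particular target construction (clamped, std-scaled mean shifts), and all names (`GaussianPolicy`, `propose_target`, `fit_to_target`, `step_size`) are assumptions made for illustration, not the paper's exact TDL algorithm.

```python
import torch
import torch.nn as nn
import torch.distributions as D


class GaussianPolicy(nn.Module):
    """Diagonal Gaussian policy: state-dependent mean, global log-std (illustrative)."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return D.Normal(self.mean(obs), self.log_std.exp())


def propose_target(policy, obs, actions, advantages, step_size=0.1):
    """Propose a target distribution close (in KL) to the current policy.

    The mean is shifted toward (away from) each sampled action when its
    advantage is positive (negative). Because the shift is clamped and scaled
    by the current standard deviation, KL(target || policy) is at most
    0.5 * act_dim * step_size**2 per state, even for a near-deterministic policy.
    This construction is an assumption for illustration, not the paper's rule.
    """
    with torch.no_grad():
        cur = policy.dist(obs)
        delta = step_size * torch.clamp(advantages, -1.0, 1.0).unsqueeze(-1)
        shift = delta * cur.stddev * torch.sign(actions - cur.mean)
        return D.Normal(cur.mean + shift, cur.stddev)


def fit_to_target(policy, optimizer, obs, target, epochs=10):
    """Train the policy network to approach the proposed target distribution."""
    for _ in range(epochs):
        loss = D.kl_divergence(target, policy.dist(obs)).sum(-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Toy usage with random data (obs_dim=4, act_dim=2); in practice the advantages
# would come from a critic or Monte Carlo return estimates.
policy = GaussianPolicy(4, 2)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
obs = torch.randn(32, 4)
actions = policy.dist(obs).sample()
advantages = torch.randn(32)
target = propose_target(policy, obs, actions, advantages)
fit_to_target(policy, optimizer, obs, target)
```

In this sketch, scaling the mean shift by the current standard deviation is what keeps the per-update KL divergence bounded even when the policy is nearly deterministic, which is exactly the regime where the abstract notes that ordinary policy gradient updates can become overly large.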

Published

2020-04-03

How to Cite

Zhang, C., Li, Y., & Li, J. (2020). Policy Search by Target Distribution Learning for Continuous Control. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6770-6777. https://doi.org/10.1609/aaai.v34i04.6156

Section

AAAI Technical Track: Machine Learning