Hierarchical Policy Search via Return-Weighted Density Estimation

Authors

  • Takayuki Osa, University of Tokyo / RIKEN
  • Masashi Sugiyama, RIKEN / University of Tokyo

DOI:

https://doi.org/10.1609/aaai.v32i1.11706

Keywords:

Reinforcement learning

Abstract

Learning an optimal policy from a multi-modal reward function is a challenging problem in reinforcement learning (RL). Hierarchical RL (HRL) tackles this problem by learning a hierarchical policy, where multiple option policies are in charge of different strategies corresponding to modes of a reward function and a gating policy selects the best option for a given context. Although HRL has been demonstrated to be promising, current state-of-the-art methods still cannot perform well in complex real-world problems due to the difficulty of identifying modes of the reward function. In this paper, we propose a novel method called hierarchical policy search via return-weighted density estimation (HPSDE), which can efficiently identify the modes through density estimation with return-weighted importance sampling. Our proposed method finds option policies corresponding to the modes of the return function and automatically determines the number and locations of the option policies, which significantly reduces the burden of hyper-parameter tuning. Through experiments, we demonstrate that the proposed HPSDE successfully learns option policies corresponding to modes of the return function and that it can be applied to a motion planning problem of a redundant robotic manipulator.
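To give a sense of the core idea described in the abstract, the following is a minimal, hypothetical sketch of return-weighted density estimation for option discovery. It is not the authors' reference implementation: the exponential return transformation, the importance resampling step, the toy bimodal return landscape, and the use of a Dirichlet-process Gaussian mixture to determine the number of options are all illustrative assumptions.

```python
# Hypothetical sketch: return-weighted density estimation for option discovery.
# Assumptions (not from the paper): softmax return weighting, importance
# resampling, and a Dirichlet-process GMM to pick the number of options.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Step 1: collect rollouts (policy parameter theta, return R).
# A toy bimodal return landscape over a 1-D parameter stands in for real rollouts.
n = 2000
theta = rng.uniform(-3.0, 3.0, size=(n, 1))
returns = np.exp(-(theta[:, 0] - 1.5) ** 2) + np.exp(-(theta[:, 0] + 1.5) ** 2)

# Step 2: return-weighted importance sampling.
# Transform returns into non-negative weights, then resample parameters with
# probability proportional to the weights so high-return regions dominate.
beta = 5.0                                    # inverse temperature (assumed)
w = np.exp(beta * (returns - returns.max()))
w /= w.sum()
theta_rw = theta[rng.choice(n, size=n, p=w)]

# Step 3: density estimation.
# A Dirichlet-process GMM prunes unused components, so the number of modes
# (and hence option policies) is determined automatically from the data.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(theta_rw)

# Step 4: read off option policies.
# Each component with non-negligible weight is treated as one option policy
# (a Gaussian over policy parameters); a gating policy would select among them.
active = dpgmm.weights_ > 0.05
print("number of discovered options:", active.sum())
print("option means:", dpgmm.means_[active].ravel())
```

On this toy landscape the mixture recovers two active components near the two return peaks, illustrating how the number and locations of option policies can fall out of the density estimate rather than being fixed in advance.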

Published

2018-04-29

How to Cite

Osa, T., & Sugiyama, M. (2018). Hierarchical Policy Search via Return-Weighted Density Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11706