Policy Optimization with Model-Based Explorations

Authors

  • Feiyang Pan Chinese Academy of Sciences
  • Qingpeng Cai Tsinghua University
  • An-Xiang Zeng Alibaba
  • Chun-Xiang Pan Alibaba Group
  • Qing Da Alibaba Group
  • Hualin He Alibaba Group
  • Qing He Chinese Academy of Sciences
  • Pingzhong Tang Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014675

Abstract

Model-free reinforcement learning methods such as the Proximal Policy Optimization algorithm (PPO) have been successfully applied to complex decision-making problems such as Atari games. However, these methods suffer from high variance and high sample complexity. On the other hand, model-based reinforcement learning methods that learn the transition dynamics are more sample efficient, but they often suffer from the bias of the transition estimation. How to make use of both model-based and model-free learning is a central problem in reinforcement learning.

In this paper, we present a new technique to address the tradeoff between exploration and exploitation, which regards the difference between model-free and model-based estimations as a measure of exploration value. We apply this new technique to the PPO algorithm and arrive at a new policy optimization method, named Policy Optimization with Model-Based Explorations (POME). POME uses two components to predict the actions' target values: a model-free one estimated by Monte-Carlo sampling and a model-based one which learns a transition model and predicts the value of the next state. POME adds the error of these two target estimations as the additional exploration value for each state-action pair, i.e., it encourages the algorithm to explore the states with larger target errors, which are hard to estimate. We compare POME with PPO on the Atari 2600 games, and the results show that POME outperforms PPO on 33 out of 49 games.
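As a rough illustration of the idea described in the abstract, the sketch below (Python/NumPy) computes a model-free Monte-Carlo target and a model-based one-step target, and adds their discrepancy as an exploration bonus. The function name, the `alpha` weighting, and the input layout are assumptions made for illustration only, not the paper's exact formulation.

```python
import numpy as np

def pome_exploration_targets(mc_returns, rewards, model_next_values,
                             gamma=0.99, alpha=0.1):
    """Hedged sketch of POME-style targets based only on the abstract.

    mc_returns        : model-free Monte-Carlo returns sampled from the trajectory
    rewards           : observed one-step rewards
    model_next_values : values of next states predicted via a learned transition model
    alpha             : weight of the exploration bonus (assumed hyperparameter)
    """
    # Model-free target estimation: the sampled Monte-Carlo return.
    mf_target = np.asarray(mc_returns, dtype=np.float64)
    # Model-based target estimation: bootstrap from the transition model's
    # prediction of the next state's value.
    mb_target = np.asarray(rewards, dtype=np.float64) + gamma * np.asarray(model_next_values, dtype=np.float64)
    # The error between the two estimations serves as the exploration value:
    # large disagreement marks state-action pairs that are hard to estimate.
    exploration_bonus = np.abs(mf_target - mb_target)
    # Augmented target encourages exploring high-discrepancy pairs.
    return mf_target + alpha * exploration_bonus
```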

Published

2019-07-17

How to Cite

Pan, F., Cai, Q., Zeng, A.-X., Pan, C.-X., Da, Q., He, H., He, Q., & Tang, P. (2019). Policy Optimization with Model-Based Explorations. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4675-4682. https://doi.org/10.1609/aaai.v33i01.33014675

Section

AAAI Technical Track: Machine Learning