Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic

Authors

  • Zhihai Wang, University of Science and Technology of China
  • Jie Wang, University of Science and Technology of China; Hefei Comprehensive National Science Center
  • Qi Zhou, University of Science and Technology of China
  • Bin Li, University of Science and Technology of China
  • Houqiang Li, University of Science and Technology of China; Hefei Comprehensive National Science Center

DOI:

https://doi.org/10.1609/aaai.v36i8.20839

Keywords:

Machine Learning (ML)

Abstract

Model-based reinforcement learning algorithms, which learn a model of the environment to make decisions, are more sample efficient than their model-free counterparts. The sample efficiency of model-based approaches relies on how well the learned model approximates the environment. However, learning an accurate model is challenging, especially in complex and noisy environments. To tackle this problem, we propose the conservative model-based actor-critic (CMBAC), a novel approach that achieves high sample efficiency without a strong reliance on accurate learned models. Specifically, CMBAC learns multiple estimates of the Q-value function from a set of inaccurate models and uses the average of the bottom-k estimates---a conservative estimate---to optimize the policy. An appealing feature of CMBAC is that the conservative estimates effectively encourage the agent to avoid unreliable “promising actions”---whose values are high in only a small fraction of the models. Experiments demonstrate that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging control tasks, and that the proposed method is more robust than previous methods in noisy environments.
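The bottom-k averaging described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `conservative_q_estimate` and the toy inputs are assumptions for the example; in the paper, each estimate would come from a Q-function trained on rollouts of one learned model.

```python
import numpy as np

def conservative_q_estimate(q_estimates, k):
    """Average the bottom-k of an ensemble of Q-value estimates.

    q_estimates: Q(s, a) estimates, one per learned (inaccurate) model.
    k: number of smallest estimates to average.
    """
    q = np.sort(np.asarray(q_estimates, dtype=float))
    return q[:k].mean()

# An action that looks "promising" under only one model is discounted:
conservative_q_estimate([1.0, 1.0, 1.0, 10.0], k=3)  # -> 1.0
```

Because the single optimistic outlier (10.0) falls outside the bottom-k set, it does not inflate the value used to optimize the policy, which is how CMBAC discourages unreliable "promising actions".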

Published

2022-06-28

How to Cite

Wang, Z., Wang, J., Zhou, Q., Li, B., & Li, H. (2022). Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8612-8620. https://doi.org/10.1609/aaai.v36i8.20839

Section

AAAI Technical Track on Machine Learning III