Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning

Authors

  • Wenzhen Huang, School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CRISE, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  • Qiyue Yin, School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CRISE, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  • Junge Zhang, School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CRISE, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  • Kaiqi Huang, School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CRISE, Institute of Automation, Chinese Academy of Sciences, Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v35i9.16958

Keywords:

Reinforcement Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Model-based reinforcement learning (RL) is more sample-efficient than model-free RL because it trains on imaginary trajectories generated by a learned dynamics model. When the model is inaccurate or biased, however, imaginary trajectories can be harmful for training the action-value and policy functions. To alleviate this problem, this paper proposes to adaptively reweight the imaginary transitions, so as to reduce the negative effects of poorly generated trajectories. More specifically, we evaluate the effect of an imaginary transition by measuring how the loss computed on real samples changes when the transition is used to train the action-value and policy functions. Based on this evaluation criterion, we design a well-constructed meta-gradient algorithm that reweights each imaginary transition. Extensive experimental results demonstrate that our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks. Visualizations of the learned weights further validate the necessity of the reweighting scheme.
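The reweighting idea described in the abstract, evaluating an imaginary transition by how a virtual update on it changes the loss on real samples and adjusting its weight by a meta-gradient, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example: the toy linear Q-function, batch shapes, step sizes, and all names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of meta-gradient reweighting of imaginary transitions.
    # Assumes PyTorch; the toy Q-function and hyper-parameters are placeholders.
    import torch

    state_dim, action_dim, gamma, lr = 4, 2, 0.99, 1e-2

    # Toy linear action-value function Q(s, a) parameterised by a weight matrix.
    theta = torch.randn(state_dim, action_dim, requires_grad=True)

    def q_values(params, states):
        return states @ params  # (batch, action_dim)

    def td_loss(params, batch, weights=None):
        s, a, r, s2 = batch
        q = q_values(params, s).gather(1, a).squeeze(1)
        with torch.no_grad():  # detached TD target
            target = r + gamma * q_values(params, s2).max(dim=1).values
        per_sample = (q - target) ** 2
        return per_sample.mean() if weights is None else (weights * per_sample).mean()

    def make_batch(n):  # random stand-in for sampled transitions
        return (torch.randn(n, state_dim),
                torch.randint(action_dim, (n, 1)),
                torch.randn(n),
                torch.randn(n, state_dim))

    real_batch = make_batch(32)   # transitions from the environment
    imag_batch = make_batch(32)   # transitions from the learned model

    # Per-transition weights for the imaginary batch (meta-parameters).
    log_w = torch.zeros(32, requires_grad=True)
    weights = torch.sigmoid(log_w)

    # 1) Virtual update of theta on the weighted imaginary loss,
    #    keeping the graph so gradients can flow back into the weights.
    inner_loss = td_loss(theta, imag_batch, weights)
    grad_theta, = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_virtual = theta - lr * grad_theta

    # 2) Evaluate the virtually updated parameters on real data; the change in
    #    this loss measures how helpful each imaginary transition was.
    outer_loss = td_loss(theta_virtual, real_batch)

    # 3) Meta-gradient step: transitions whose virtual update raises the real
    #    loss are down-weighted, those that lower it are up-weighted.
    grad_w, = torch.autograd.grad(outer_loss, log_w)
    with torch.no_grad():
        log_w -= 1e-1 * grad_w

In this sketch, imaginary transitions that increase the real-data loss receive smaller weights in subsequent updates; per the abstract, the paper applies this criterion to the training of both the action-value and policy functions.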

Published

2021-05-18

How to Cite

Huang, W., Yin, Q., Zhang, J., & Huang, K. (2021). Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7848-7856. https://doi.org/10.1609/aaai.v35i9.16958

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II