Foresee then Evaluate: Decomposing Value Estimation with Latent Future Prediction

Authors

  • Hongyao Tang College of Intelligence and Computing, Tianjin University
  • Zhaopeng Meng College of Intelligence and Computing, Tianjin University
  • Guangyong Chen Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
  • Pengfei Chen The Chinese University of Hong Kong
  • Chen Chen Huawei Noah’s Ark Lab
  • Yaodong Yang Huawei Noah's Ark Lab
  • Luo Zhang Tianjin University
  • Wulong Liu Huawei Noah's Ark Lab
  • Jianye Hao College of Intelligence and Computing, Tianjin University; Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v35i11.17182

Keywords:

Reinforcement Learning

Abstract

The value function is a central notion in Reinforcement Learning (RL). Value estimation, especially with function approximation, is challenging because it must account for stochastic environment dynamics and reward signals that may be sparse and delayed. A typical model-free RL algorithm estimates the value of a policy with Temporal Difference (TD) or Monte Carlo (MC) methods directly from rewards, without explicitly taking the dynamics into consideration. In this paper, we propose Value Decomposition with Future Prediction (VDFP), which provides an explicit two-step understanding of the value estimation process: 1) first foresee the latent future, 2) then evaluate it. We analytically decompose the value function into a latent future dynamics part and a policy-independent trajectory return part, inducing a way to model latent dynamics and returns separately in value estimation. Further, we derive a practical deep RL algorithm consisting of a convolutional model that learns compact trajectory representations from past experiences, a conditional variational auto-encoder that predicts the latent future dynamics, and a convex return model that evaluates the trajectory representation. In experiments, we empirically demonstrate the effectiveness of our approach for both off-policy and on-policy RL in several OpenAI Gym continuous control tasks, as well as a few challenging variants with delayed reward.
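The abstract's "foresee then evaluate" decomposition can be illustrated with a minimal sketch. Here, random linear maps stand in for the paper's learned components: `foresee` plays the role of the conditional variational auto-encoder's decoder, sampling latent future trajectory representations conditioned on a state, and `evaluate` plays the role of the return model that maps each latent future to a scalar return. All names, dimensions, and the linear/tanh forms are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, LATENT_DIM, N_SAMPLES = 4, 8, 32

# Hypothetical stand-ins for the learned networks in VDFP:
# W_pred mimics a decoder mapping (state, noise) -> latent trajectory code;
# w_ret mimics the policy-independent return model.
W_pred = rng.normal(size=(LATENT_DIM, STATE_DIM + LATENT_DIM))
w_ret = rng.normal(size=LATENT_DIM)

def foresee(state, n_samples=N_SAMPLES):
    """Step 1: sample latent future trajectory representations given a state."""
    noise = rng.normal(size=(n_samples, LATENT_DIM))
    inputs = np.hstack([np.tile(state, (n_samples, 1)), noise])
    return np.tanh(inputs @ W_pred.T)          # shape: (n_samples, LATENT_DIM)

def evaluate(latents):
    """Step 2: map each predicted latent future to a scalar trajectory return."""
    return latents @ w_ret                     # shape: (n_samples,)

def value_estimate(state):
    """V(s) is approximated by averaging returns over predicted futures."""
    return float(evaluate(foresee(state)).mean())

v = value_estimate(np.zeros(STATE_DIM))
```

The point of the decomposition is that the two parts can be trained separately: the future-prediction model learns only the dynamics under the policy, while the return model learns a policy-independent mapping from trajectories to returns.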

Published

2021-05-18

How to Cite

Tang, H., Meng, Z., Chen, G., Chen, P., Chen, C., Yang, Y., Zhang, L., Liu, W., & Hao, J. (2021). Foresee then Evaluate: Decomposing Value Estimation with Latent Future Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9834-9842. https://doi.org/10.1609/aaai.v35i11.17182

Section

AAAI Technical Track on Machine Learning IV