Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning

Authors

  • Yang Yue, Tsinghua University; SEA AI Lab
  • Bingyi Kang, SEA AI Lab
  • Zhongwen Xu, SEA AI Lab
  • Gao Huang, Tsinghua University
  • Shuicheng Yan, SEA AI Lab

DOI:

https://doi.org/10.1609/aaai.v37i9.26311

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world applicability. Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL. These methods usually rely on contrastive learning and data augmentation to train a transition model, which differs from how the model is actually used in RL, namely for value-based planning. Consequently, the representations learned by these methods may be good for recognition but not optimal for estimating state values and solving decision problems. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), that learns representations directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") from the current one and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a Q-value head to both states to obtain two action-value distributions, then computes and minimizes a distance between them, forcing the imagined state to yield an action-value prediction similar to that of the real state. We develop two implementations of this idea, for discrete and continuous action spaces respectively, and conduct experiments on the Atari 100k and DeepMind Control Suite benchmarks to validate their effectiveness in improving sample efficiency. Our methods achieve new state-of-the-art performance among search-free RL algorithms.
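To make the mechanism concrete, below is a minimal PyTorch-style sketch of a value-consistency loss for the discrete-action case. The module names (encoder, transition, q_head), the multi-step rollout structure, and the choice of KL divergence over softmax-normalized Q-values are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VCRLoss(nn.Module):
    """Sketch of a value-consistency loss for discrete actions.

    encoder:    maps observations to latent states
    transition: predicts the next latent state from (latent, action)
    q_head:     maps a latent state to per-action Q-values
    """
    def __init__(self, encoder, transition, q_head):
        super().__init__()
        self.encoder = encoder
        self.transition = transition
        self.q_head = q_head

    def forward(self, obs, actions, next_obs_seq):
        # Encode the current observation into a latent state.
        z = self.encoder(obs)
        loss = 0.0
        for t, a in enumerate(actions.unbind(dim=1)):
            # Roll the model forward to get the "imagined" latent state.
            z = self.transition(z, a)
            # Encode the real next observation (treated as a fixed target,
            # so no gradient flows through this branch).
            with torch.no_grad():
                z_real = self.encoder(next_obs_seq[:, t])
            # Apply the Q-value head to both imagined and real latents.
            q_imagined = self.q_head(z)
            q_real = self.q_head(z_real)
            # Treat Q-values as logits of an action-value distribution and
            # minimize the KL between real and imagined distributions
            # (the specific distance used here is an assumption).
            loss = loss + F.kl_div(
                F.log_softmax(q_imagined, dim=-1),
                F.softmax(q_real, dim=-1),
                reduction="batchmean",
            )
        return loss / actions.size(1)
```

In this sketch, consistency is enforced in value space rather than latent space: the imagined and real latents are never compared directly, only their induced action-value predictions, which is the distinction the abstract draws against contrastive transition-model training.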

Published

2023-06-26

How to Cite

Yue, Y., Kang, B., Xu, Z., Huang, G., & Yan, S. (2023). Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11069-11077. https://doi.org/10.1609/aaai.v37i9.26311

Section

AAAI Technical Track on Machine Learning IV