Critic-Guided Decision Transformer for Offline Reinforcement Learning

Authors

  • Yuanfu Wang (Shanghai Jiao Tong University; Shanghai Artificial Intelligence Laboratory)
  • Chao Yang (Shanghai Artificial Intelligence Laboratory)
  • Ying Wen (Shanghai Jiao Tong University)
  • Yu Liu (Shanghai Artificial Intelligence Laboratory; SenseTime Group LTD)
  • Yu Qiao (Shanghai Artificial Intelligence Laboratory)

DOI:

https://doi.org/10.1609/aaai.v38i14.29499

Keywords:

ML: Reinforcement Learning, ROB: Behavior Learning & Control

Abstract

Recent advancements in offline reinforcement learning (RL) have underscored the capabilities of Return-Conditioned Supervised Learning (RCSL), a paradigm that learns the action distribution based on target returns for each state in a supervised manner. However, prevailing RCSL methods largely focus on deterministic trajectory modeling, disregarding stochastic state transitions and the diversity of future trajectory distributions. A fundamental challenge arises from the inconsistency between the sampled returns within individual trajectories and the expected returns across multiple trajectories. Fortunately, value-based methods offer a solution by leveraging a value function to approximate the expected returns, thereby addressing the inconsistency effectively. Building upon these insights, we propose a novel approach, termed the Critic-Guided Decision Transformer (CGDT), which combines the predictability of long-term returns from value-based methods with the trajectory modeling capability of the Decision Transformer. By incorporating a learned value function, known as the critic, CGDT ensures a direct alignment between the specified target returns and the expected returns of actions. This integration bridges the gap between the deterministic nature of RCSL and the probabilistic characteristics of value-based methods. Empirical evaluations on stochastic environments and D4RL benchmark datasets demonstrate the superiority of CGDT over traditional RCSL methods. These results highlight the potential of CGDT to advance the state of the art in offline RL and extend the applicability of RCSL to a wide range of RL tasks.
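To make the critic-guidance idea concrete, the sketch below shows one plausible way to combine a supervised trajectory-modeling loss with a critic term, in the spirit of the alignment the abstract describes. It is a minimal illustration under our own assumptions, not the authors' released implementation: the names (QCritic, critic_guided_loss), the squared-error form of both terms, and the weighting coefficient lambda_cg are hypothetical placeholders.

import torch
import torch.nn as nn

class QCritic(nn.Module):
    """Small MLP critic Q(s, a): approximates the expected return of
    taking action a in state s (a placeholder; any learned critic works)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Concatenate state and action, predict a scalar expected return.
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def critic_guided_loss(pred_action, data_action, state, target_return,
                       critic, lambda_cg=0.5):
    """Behavior cloning plus a critic-guidance penalty.

    bc_loss    : standard RCSL/DT supervised term on the dataset action.
    guide_loss : pulls the critic's expected return of the *predicted*
                 action toward the return-to-go the model was conditioned on.
    lambda_cg  : illustrative trade-off weight between the two terms.
    """
    bc_loss = ((pred_action - data_action) ** 2).mean()
    q_pred = critic(state, pred_action)
    guide_loss = ((q_pred - target_return) ** 2).mean()
    return bc_loss + lambda_cg * guide_loss

# Toy usage with random tensors standing in for a Decision Transformer's output.
state = torch.randn(32, 17)                            # batch of states
data_action = torch.randn(32, 6)                       # actions from the offline dataset
pred_action = torch.randn(32, 6, requires_grad=True)   # transformer's predicted actions
rtg = torch.randn(32)                                  # target returns (returns-to-go)

critic = QCritic(state_dim=17, action_dim=6)
loss = critic_guided_loss(pred_action, data_action, state, rtg, critic)
loss.backward()

In this reading, the guidance term penalizes actions whose expected return under the critic deviates from the return the transformer was conditioned on, so the specified target return and the expected return of the chosen action stay aligned even under stochastic transitions.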

Published

2024-03-24

How to Cite

Wang, Y., Yang, C., Wen, Y., Liu, Y., & Qiao, Y. (2024). Critic-Guided Decision Transformer for Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15706-15714. https://doi.org/10.1609/aaai.v38i14.29499

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V