Are Expressive Models Truly Necessary for Offline RL?

Authors

  • Guan Wang Tsinghua University
  • Haoyi Niu Tsinghua University
  • Jianxiong Li Tsinghua University
  • Li Jiang McGill University
  • Jianming Hu Tsinghua University
  • Xianyuan Zhan Tsinghua University Shanghai AI Laboratory Beijing Academy of Artificial Intelligence

DOI:

https://doi.org/10.1609/aaai.v39i20.35403

Abstract

Among the various branches of offline reinforcement learning (RL), goal-conditioned supervised learning (GCSL) has gained increasing popularity, as it formulates the offline RL problem as a sequential modeling task, thereby bypassing the notoriously difficult credit-assignment challenge of value learning in the conventional RL paradigm. Sequential modeling, however, requires capturing accurate dynamics across long horizons in trajectory data to ensure reasonable policy performance. Meeting this requirement has made large, expressive models a popular choice in recent literature, but at the cost of significantly increased computation and inference latency. Contrary to this trend, we reveal that lightweight models, as simple as shallow 2-layer MLPs, can also achieve accurate dynamics consistency and significantly reduced sequential modeling errors compared with large expressive models, by adopting a simple recursive planning scheme: recursively plan coarse-grained future sub-goals based on current and target information, then execute actions with a goal-conditioned policy learned from data relabeled with these sub-goal ground truths. We term our method Recursive Skip-Step Planning (RSP). Simple yet effective, RSP enjoys substantial efficiency improvements thanks to its lightweight structure, and substantially outperforms existing methods, reaching new state-of-the-art (SOTA) performance on the D4RL benchmark, especially on multi-stage long-horizon tasks.
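The recursive scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `plan_subgoal` stands in for a learned shallow MLP planner (here a toy midpoint predictor), `recursive_plan` repeatedly coarsens the target toward a nearby sub-goal, and `policy` stands in for the goal-conditioned policy; all three names and their internals are hypothetical.

```python
import numpy as np

def plan_subgoal(state, goal):
    """Stand-in for a learned shallow MLP f(state, goal) -> sub-goal.
    A toy midpoint predictor is used purely for illustration."""
    return (state + goal) / 2.0

def recursive_plan(state, goal, depth):
    """Recursively halve the planning horizon: at each level, predict a
    coarse-grained sub-goal between the current state and the current
    target, then treat that sub-goal as the new, nearer target."""
    target = goal
    for _ in range(depth):
        target = plan_subgoal(state, target)
    return target  # nearest sub-goal on which to condition the policy

def policy(state, subgoal):
    """Stand-in for a goal-conditioned policy pi(a | state, subgoal):
    here, a simple bounded proportional step toward the sub-goal."""
    return np.clip(subgoal - state, -1.0, 1.0)

state = np.array([0.0])
goal = np.array([8.0])
subgoal = recursive_plan(state, goal, depth=3)  # 8 -> 4 -> 2 -> 1
action = policy(state, subgoal)
```

With depth 3, the far-away goal at 8.0 is refined into a near-term sub-goal at 1.0, which the low-level policy can track; deeper recursion yields finer-grained sub-goals.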

Published

2025-04-11

How to Cite

Wang, G., Niu, H., Li, J., Jiang, L., Hu, J., & Zhan, X. (2025). Are Expressive Models Truly Necessary for Offline RL?. Proceedings of the AAAI Conference on Artificial Intelligence, 39(20), 21062–21070. https://doi.org/10.1609/aaai.v39i20.35403

Section

AAAI Technical Track on Machine Learning VI