Flow to Control: Offline Reinforcement Learning with Lossless Primitive Discovery
  • Yiqin Yang Tsinghua University
  • Hao Hu Tsinghua University
  • Wenzhe Li Tsinghua University
  • Siyuan Li Harbin Institute of Technology
  • Jun Yang Tsinghua University
  • Qianchuan Zhao Tsinghua University
  • Chongjie Zhang Tsinghua University

ML: Reinforcement Learning Algorithms, ML: Reinforcement Learning Theory


Offline reinforcement learning (RL) enables an agent to learn effectively from logged data, which significantly extends the applicability of RL algorithms to real-world scenarios where exploration can be expensive or unsafe. Previous works have shown that extracting primitive skills from the recurring, temporally extended structures in the logged data yields better learning. However, these methods degrade severely when the learned primitives lack the representational capacity to recover the original policy space, especially in offline settings. In this paper, we give a quantitative characterization of the performance of offline hierarchical learning and highlight the importance of learning lossless primitives. To this end, we propose a flow-based structure as the representation for low-level policies. This allows us to represent the behaviors in the dataset faithfully while retaining the expressiveness needed to recover the whole policy space. We show that such lossless primitives can drastically improve the performance of hierarchical policies. Experimental results and extensive ablation studies on the standard D4RL benchmark show that our method represents policies well and achieves superior performance on most tasks.
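The "lossless" property the abstract attributes to flow-based primitives comes from invertibility: a normalizing flow is a bijection, so any action in the original policy space can be encoded into a latent and decoded back exactly. The sketch below is an illustration only, not the authors' implementation: it uses a RealNVP-style affine coupling layer in NumPy, with the coupling networks replaced by fixed random linear maps for brevity.

```python
import numpy as np

class AffineCoupling:
    """RealNVP-style affine coupling layer: an invertible (bijective) map.

    Hypothetical sketch: the conditioner "networks" producing the log-scale
    s and shift t are fixed random linear maps; a real flow-based policy
    would use learned neural networks conditioned on the state as well.
    """

    def __init__(self, dim, rng):
        self.half = dim // 2
        self.Ws = 0.1 * rng.standard_normal((dim - self.half, self.half))
        self.Wt = 0.1 * rng.standard_normal((dim - self.half, self.half))

    def forward(self, x):
        # y1 = x1;  y2 = x2 * exp(s(x1)) + t(x1)
        x1, x2 = x[: self.half], x[self.half:]
        s, t = self.Ws @ x1, self.Wt @ x1
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y):
        # exact analytic inverse: x2 = (y2 - t(y1)) * exp(-s(y1))
        y1, y2 = y[: self.half], y[self.half:]
        s, t = self.Ws @ y1, self.Wt @ y1
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])


rng = np.random.default_rng(0)
# Stack two coupling layers, flipping the halves in between so that
# every action dimension gets transformed by at least one layer.
layers = [AffineCoupling(4, rng), AffineCoupling(4, rng)]
flip = lambda v: np.concatenate([v[2:], v[:2]])

def encode(action):
    """Map an action to its latent code via the inverse flow."""
    return layers[1].inverse(flip(layers[0].inverse(action)))

def decode(latent):
    """Map a latent code back to an action via the forward flow."""
    return layers[0].forward(flip(layers[1].forward(latent)))

action = rng.standard_normal(4)
recovered = decode(encode(action))   # matches `action` up to float error
```

Because both directions are exact analytic inverses, the round trip reconstructs the action to floating-point precision; a primitive parameterized this way cannot "forget" behaviors in the dataset the way a lossy latent model (e.g., a VAE decoder) can.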

How to Cite

Yang, Y., Hu, H., Li, W., Li, S., Yang, J., Zhao, Q., & Zhang, C. (2023). Flow to Control: Offline Reinforcement Learning with Lossless Primitive Discovery. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10843-10851. https://doi.org/10.1609/aaai.v37i9.26286

AAAI Technical Track on Machine Learning IV