Episodic Return Decomposition by Difference of Implicitly Assigned Sub-trajectory Reward

Authors

  • Haoxin Lin (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University; Polixir Technologies)
  • Hongqiu Wu (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University)
  • Jiaji Zhang (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University)
  • Yihao Sun (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University)
  • Junyin Ye (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University; Polixir Technologies)
  • Yang Yu (National Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University; Polixir Technologies; Peng Cheng Laboratory)

DOI

https://doi.org/10.1609/aaai.v38i12.29287

Keywords

ML: Reinforcement Learning

Abstract

Real-world decision-making problems are usually accompanied by delayed rewards, which hurt the sample efficiency of reinforcement learning, especially in the extreme case where the only feedback is the episodic reward obtained at the end of an episode. Episodic return decomposition is a promising way to handle this episodic-reward setting, and several algorithms along this line have demonstrated the effectiveness of the step-wise proxy rewards they learn. However, existing methods lack either attribution or representation capacity, which leads to inefficient decomposition on long-horizon episodes. In this paper, we propose a novel episodic return decomposition method called Diaster (Difference of implicitly assigned sub-trajectory reward). Diaster decomposes any episodic reward into the credits of two sub-trajectories divided at an arbitrary cut point, and the step-wise proxy rewards are obtained as differences of these credits in expectation. We verify both theoretically and empirically that the decomposed proxy reward function can guide the policy to be nearly optimal. Experimental results show that our method outperforms previous state-of-the-art methods in both sample efficiency and final performance. The code is available at https://github.com/HxLyn3/Diaster.
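To make the decomposition idea concrete, below is a minimal sketch of the scheme as described in the abstract, not the authors' released implementation: the names SubTrajReward, diaster_loss, and proxy_rewards are hypothetical, and the paper's actual objective, architecture, and expectation estimator may differ. A sub-trajectory reward network is regressed so that the credits of the prefix and suffix on either side of a random cut point sum to the episodic return; per-step proxy rewards are then read off as differences of consecutive prefix credits.

    import torch
    import torch.nn as nn

    class SubTrajReward(nn.Module):
        # Scores a sub-trajectory with a single scalar credit; a GRU summarizes
        # the variable-length sequence of concatenated (state, action) features.
        def __init__(self, feat_dim, hidden_dim=64):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, subtraj):              # subtraj: (batch, steps, feat_dim)
            _, h = self.rnn(subtraj)             # h: (1, batch, hidden_dim)
            return self.head(h[-1]).squeeze(-1)  # (batch,)

    def diaster_loss(f, traj, ep_return):
        # Regress the sum of the two sub-trajectory credits at a random cut
        # point onto the episodic return: f(prefix) + f(suffix) ~ R(tau).
        T = traj.shape[1]
        t = torch.randint(1, T, (1,)).item()     # random cut point in [1, T-1]
        pred = f(traj[:, :t]) + f(traj[:, t:])
        return ((pred - ep_return) ** 2).mean()

    def proxy_rewards(f, traj):
        # Step-wise proxy rewards as differences of consecutive prefix credits;
        # the first step keeps the credit of the length-1 prefix.
        T = traj.shape[1]
        with torch.no_grad():
            credits = torch.stack([f(traj[:, : t + 1]) for t in range(T)], dim=1)
        return torch.cat([credits[:, :1], credits[:, 1:] - credits[:, :-1]], dim=1)

In this toy form, the network f would be trained over many episodes and random cut points, and the per-step proxy rewards telescope so that their sum recovers the credit assigned to the full trajectory.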

Published

2024-03-24

How to Cite

Lin, H., Wu, H., Zhang, J., Sun, Y., Ye, J., & Yu, Y. (2024). Episodic Return Decomposition by Difference of Implicitly Assigned Sub-trajectory Reward. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13808–13816. https://doi.org/10.1609/aaai.v38i12.29287

Issue

Vol. 38 No. 12 (2024)

Section

AAAI Technical Track on Machine Learning III