Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)

Authors

  • Rui Kong — National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
  • Chenyang Wu — National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
  • Zongzhang Zhang — National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China

DOI:

https://doi.org/10.1609/aaai.v38i21.30466

Keywords:

Reinforcement Learning, Policy Gradient, Generalization

Abstract

Current policy gradient techniques excel at refining policies over sampled states but falter when generalizing to unseen states. To address this, we introduce Reinforcement Sampling (RS), a novel method that leverages a generalizable action value function to sample improved decisions. RS improves decision quality whenever the action value estimate is accurate: it refines the agent's decisions on the fly, at the states the agent is currently visiting. Compared with the historically experienced states on which conventional policy gradient methods improve the policy, these currently visited states are more relevant to the agent. Our method fully exploits the generalizability of the value function on unseen states and sheds new light on the future development of generalizable reinforcement learning.
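The abstract does not spell out the algorithm, but the core idea (use a generalizable Q-function to improve the decision at the state being visited, rather than updating the policy on past states) can be sketched roughly as rerank-by-value sampling. The function names (`reinforcement_sample`, `policy_sample`, `q_fn`) and the greedy-over-candidates rule below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def reinforcement_sample(state, policy_sample, q_fn, n_candidates=8, rng=None):
    """Improve the decision at the *currently visited* state on the fly:
    draw several candidate actions from the base policy and keep the one
    the learned action value function scores highest.

    Hypothetical sketch of the idea described in the abstract; the
    interfaces `policy_sample(state, rng)` and `q_fn(state, action)`
    are assumed, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    candidates = [policy_sample(state, rng) for _ in range(n_candidates)]
    values = [q_fn(state, a) for a in candidates]
    # Greedy selection among sampled actions; any Q-estimation error
    # directly bounds how much this step can improve the decision.
    return candidates[int(np.argmax(values))]

# Toy illustration: scalar state/action, Q prefers actions near -state.
toy_policy = lambda s, rng: rng.normal(0.0, 1.0)
toy_q = lambda s, a: -(a + s) ** 2
action = reinforcement_sample(0.5, toy_policy, toy_q, n_candidates=32)
```

Because the reranking happens at decision time, it exploits the value function's generalization to states never seen during training, which is the contrast the abstract draws with conventional policy gradient updates.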

Published

2024-03-24

How to Cite

Kong, R., Wu, C., & Zhang, Z. (2024). Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23546–23547. https://doi.org/10.1609/aaai.v38i21.30466