Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)
DOI: https://doi.org/10.1609/aaai.v38i21.30466
Keywords: Reinforcement Learning, Policy Gradient, Generalization
Abstract
Current policy gradient techniques excel at refining policies over sampled states but falter when generalizing to unseen states. To address this, we introduce Reinforcement Sampling (RS), a novel method that leverages a generalizable action value function to sample improved decisions. RS improves decision quality whenever the action value estimate is accurate. It works by improving the agent's decisions on the fly, at the states the agent is currently visiting. Compared with the historically experienced states on which conventional policy gradient methods improve the policy, the currently visited states are more relevant to the agent. Our method fully exploits the generalizability of the value function on unseen states and sheds new light on the future development of generalizable reinforcement learning.
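The core idea described in the abstract — using a generalizable action value function to select an improved decision at the state currently being visited — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`policy_sample`, `q_value`, `reinforcement_sample`), the number of candidates, and the greedy selection rule are all assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforcement_sample(policy_sample, q_value, state, n_candidates=8):
    """Illustrative sketch: draw candidate actions from the base policy at
    the current state and keep the one the action value function rates
    highest. Any accuracy in Q on this (possibly unseen) state translates
    into an improved decision without a policy gradient update."""
    candidates = [policy_sample(state) for _ in range(n_candidates)]
    values = [q_value(state, a) for a in candidates]
    return candidates[int(np.argmax(values))]

# Toy 1-D example (hypothetical): base policy samples actions ~ N(0, 1),
# and the value function prefers actions near 0.5.
policy_sample = lambda s: rng.normal()
q_value = lambda s, a: -(a - 0.5) ** 2

action = reinforcement_sample(policy_sample, q_value, state=None)
```

Because selection happens at decision time, the improvement targets exactly the state being visited rather than states stored in a replay of past experience.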
Published
2024-03-24
How to Cite
Kong, R., Wu, C., & Zhang, Z. (2024). Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23546–23547. https://doi.org/10.1609/aaai.v38i21.30466
Section
AAAI Student Abstract and Poster Program