Guiding Robot Exploration in Reinforcement Learning via Automated Planning
DOI: https://doi.org/10.1609/icaps.v31i1.16011
Keywords: Learning Methods For Robot Planning, Planning With Uncertainty In Robotics, Real-world Robotic Planning Applications
Abstract
Reinforcement learning (RL) enables an agent to learn from trial-and-error experiences toward achieving long-term goals; automated planning aims to compute plans for accomplishing tasks using action knowledge. Despite their shared goal of completing complex tasks, the development of RL and automated planning has been largely isolated due to their different computational modalities. Focusing on improving RL agents' learning efficiency, we develop Guided Dyna-Q (GDQ) to enable RL agents to reason with action knowledge to avoid exploring less-relevant states. The action knowledge is used for generating artificial experiences from an optimistic simulation. GDQ has been evaluated in simulation and using a mobile robot conducting navigation tasks in a multi-room office environment. Compared with competitive baselines, GDQ significantly reduces the effort in exploration while improving the quality of learned policies.
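The abstract does not give GDQ's algorithmic details, but its core idea, Dyna-Q-style planning backups restricted by external action knowledge, can be illustrated with a minimal sketch. Below is a generic tabular Dyna-Q on a toy corridor task, where a hypothetical `relevant` set stands in for the plan-derived guidance that keeps planning backups away from less-relevant state-action pairs; the environment, hyperparameters, and function names are illustrative assumptions, not the paper's implementation.

```python
import random

# Toy deterministic corridor: states 0..5, actions 0 = left, 1 = right.
# Reaching state 5 yields reward 1 and ends the episode.
N, GOAL = 6, 5

def step(s, a):
    ns = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return ns, (1.0 if ns == GOAL else 0.0), ns == GOAL

def guided_dyna_q(episodes=40, planning_steps=20, alpha=0.5, gamma=0.95,
                  eps=0.2, relevant=None, seed=0):
    """Tabular Dyna-Q. `relevant` (hypothetical) restricts planning backups
    to state-action pairs endorsed by external action knowledge, mimicking
    how plan-based guidance could prune less-relevant exploration."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
    model = {}  # (s, a) -> (next_state, reward), learned from real steps

    def greedy(s):
        best = max(Q[(s, 0)], Q[(s, 1)])
        return rng.choice([a for a in (0, 1) if Q[(s, a)] == best])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice((0, 1)) if rng.random() < eps else greedy(s)
            ns, r, done = step(s, a)
            # Real-experience Q-learning update.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(ns, 0)], Q[(ns, 1)]) - Q[(s, a)])
            model[(s, a)] = (ns, r)
            # Planning: replay artificial experiences from the learned model,
            # skipping pairs the guidance marks as less relevant.
            candidates = [sa for sa in model if relevant is None or sa in relevant]
            for _ in range(min(planning_steps, len(candidates))):
                ps, pa = rng.choice(candidates)
                pns, pr = model[(ps, pa)]
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(pns, 0)], Q[(pns, 1)]) - Q[(ps, pa)])
            s = ns
    return Q

# Guidance: only "move right" pairs are relevant for reaching the goal.
Q = guided_dyna_q(relevant={(s, 1) for s in range(N)})
policy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # greedy policy should prefer "right" in every state
```

Restricting the planning backups to the `relevant` set is what lets the guided variant concentrate value propagation along goal-directed transitions instead of spreading backups over the whole model, the efficiency effect the abstract attributes to GDQ.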
Published
2021-05-17
How to Cite
Hayamizu, Y., Amiri, S., Chandan, K., Takadama, K., & Zhang, S. (2021). Guiding Robot Exploration in Reinforcement Learning via Automated Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 31(1), 625-633. https://doi.org/10.1609/icaps.v31i1.16011
Section
Special Track on Robotics