Guiding Robot Exploration in Reinforcement Learning via Automated Planning

Authors

  • Yohei Hayamizu, The University of Electro-Communications
  • Saeid Amiri, The State University of New York at Binghamton
  • Kishan Chandan, The State University of New York at Binghamton
  • Keiki Takadama, The University of Electro-Communications
  • Shiqi Zhang, The State University of New York at Binghamton

Keywords

Learning Methods For Robot Planning, Planning With Uncertainty In Robotics, Real-world Robotic Planning Applications

Abstract

Reinforcement learning (RL) enables an agent to learn from trial-and-error experiences toward achieving long-term goals; automated planning aims to compute plans for accomplishing tasks using action knowledge. Despite their shared goal of completing complex tasks, the development of RL and automated planning has been largely isolated due to their different computational modalities. Focusing on improving RL agents' learning efficiency, we develop Guided Dyna-Q (GDQ) to enable RL agents to reason with action knowledge to avoid exploring less-relevant states. The action knowledge is used for generating artificial experiences from an optimistic simulation. GDQ has been evaluated in simulation and using a mobile robot conducting navigation tasks in a multi-room office environment. Compared with competitive baselines, GDQ significantly reduces the effort in exploration while improving the quality of learned policies.
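To make the abstract's idea concrete, here is a minimal, hypothetical sketch of the Dyna-Q-style mechanism it describes: a tabular agent learns from real transitions, records them in a model, and runs extra "planning" updates only on state-action pairs that the action knowledge deems relevant. The corridor MDP, the `plan_relevant` stand-in for a symbolic planner, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical 1-D corridor MDP: states 0..N-1, goal (reward 1) at N-1.
N = 6
ACTIONS = [-1, +1]  # move left / move right

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    done = (s2 == N - 1)
    return s2, (1.0 if done else 0.0), done

def plan_relevant(s, a):
    # Stand-in for the planner's action knowledge: only rightward
    # moves lie on a plan toward the goal, so only they are simulated.
    return a == +1

def guided_dyna_q(episodes=200, n_planning=10, alpha=0.5, gamma=0.95,
                  eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)
    model = {}  # (s, a) -> (s2, r, done), learned deterministic model
    for _ in range(episodes):
        s = 0
        for _t in range(100):  # cap episode length
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:  # greedy with random tie-breaking
                a = max(ACTIONS, key=lambda x: (Q[(s, x)], rng.random()))
            s2, r, done = step(s, a)
            # Real-experience Q-learning update
            target = r + gamma * max(Q[(s2, x)] for x in ACTIONS) * (not done)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            model[(s, a)] = (s2, r, done)
            # Dyna-style planning updates, restricted to plan-relevant pairs
            relevant = [sa for sa in model if plan_relevant(*sa)]
            for _ in range(n_planning):
                if not relevant:
                    break
                ps, pa = rng.choice(relevant)
                ps2, pr, pdone = model[(ps, pa)]
                ptarget = pr + gamma * max(Q[(ps2, x)] for x in ACTIONS) * (not pdone)
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
            if done:
                break
    return Q

Q = guided_dyna_q()
# The greedy policy should move right in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
```

The point of the restriction in `plan_relevant` is that simulated updates are spent only on transitions a plan could use, which is the exploration-guiding role the abstract attributes to action knowledge.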

Published

2021-05-17

How to Cite

Hayamizu, Y., Amiri, S., Chandan, K., Takadama, K., & Zhang, S. (2021). Guiding Robot Exploration in Reinforcement Learning via Automated Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 31(1), 625-633. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/16011