GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

Authors

  • Rohan Chitnis, Massachusetts Institute of Technology
  • Tom Silver, Massachusetts Institute of Technology
  • Joshua B. Tenenbaum, Massachusetts Institute of Technology
  • Leslie Pack Kaelbling, Massachusetts Institute of Technology
  • Tomás Lozano-Pérez, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i13.17400

Keywords:

Planning/Scheduling and Learning, Model-Based Reasoning, Relational Learning, Reinforcement Learning

Abstract

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards. Inspired by human curiosity, we propose goal-literal babbling (GLIB), a simple and general method for exploration in such problems. GLIB samples relational conjunctive goals that can be understood as specific, targeted effects that the agent would like to achieve in the world, and plans to achieve these goals using the transition model being learned. We provide theoretical guarantees showing that exploration with GLIB will converge almost surely to the ground truth model. Experimentally, we find GLIB to strongly outperform existing methods in both prediction and planning on a range of tasks, encompassing standard PDDL and PPDDL planning benchmarks and a robotic manipulation task implemented in the PyBullet physics simulator.

Video: https://youtu.be/F6lmrPT6TOY
Code: https://git.io/JIsTB

Published

2021-05-18

How to Cite

Chitnis, R., Silver, T., Tenenbaum, J. B., Kaelbling, L. P., & Lozano-Pérez, T. (2021). GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11782-11791. https://doi.org/10.1609/aaai.v35i13.17400

Section

AAAI Technical Track on Planning, Routing, and Scheduling