Abstraction-Guided Policy Recovery from Expert Demonstrations
DOI: https://doi.org/10.1609/icaps.v31i1.16004
Keywords: Reinforcement Learning Using Planning (Model-Based, Bayesian, Deep, etc.), Applications That Involve a Combination of Learning with Planning or Scheduling
Abstract
Behavior cloning is an approach to automated decision-making that aims to extract meaningful information from expert demonstrations and reproduce the demonstrated behavior autonomously. Demonstrations are unlikely to cover the potential problem space exhaustively, which compromises the quality of automation when out-of-distribution states are encountered. Our approach, RECO, jointly learns an imitation policy and a recovery policy from expert data. The recovery policy steers the agent from unknown states back to the demonstrated states in the data set. While there is, by definition, no data available for learning the recovery policy, we exploit abstractions to generalize beyond the available data and simulate the recovery problem. When the most appropriate abstraction for the given data is unknown, our method selects the best recovery policy from a set generated by several candidate abstractions. In tabular domains, where we assume an agent must call a human supervisor for help when it is in an unknown state, we show that RECO results in drastically fewer calls without compromising solution quality, even with relatively few expert-provided trajectories. We also introduce an adaptation of our method to continuous domains and demonstrate RECO's ability to recover an agent from states where its supervised-learning-based imitation policy would otherwise fail.
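For readers skimming the abstract, the run-time control flow it implies in the tabular setting can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: all names here (act, pi_imitate, pi_recover, ask_supervisor, demonstrated_states) are hypothetical, and the recovery policy is simply assumed to have been produced from an abstraction beforehand.

def act(state, demonstrated_states, pi_imitate, pi_recover=None, ask_supervisor=None):
    """Pick an action for `state` in a tabular domain (illustrative sketch)."""
    if state in demonstrated_states:
        # In-distribution: imitate the expert.
        return pi_imitate[state]
    if pi_recover is not None and state in pi_recover:
        # Out-of-distribution: steer back toward demonstrated states.
        return pi_recover[state]
    # No recovery action known for this state: escalate to the human supervisor.
    return ask_supervisor(state)

# Toy usage: the expert only demonstrated states 0 and 1; a (hypothetical)
# abstraction-derived recovery policy covers state 2.
demo = {0, 1}
imitate = {0: "right", 1: "right"}
recover = {2: "left"}
print(act(2, demo, imitate, recover, ask_supervisor=lambda s: "help"))  # -> "left"

The paper's claim of "drastically fewer calls" corresponds to the third branch firing far less often once pi_recover covers most out-of-distribution states.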
Published: 2021-05-17
How to Cite
Ponnambalam, C. T., Oliehoek, F. A., & Spaan, M. T. J. (2021). Abstraction-Guided Policy Recovery from Expert Demonstrations. Proceedings of the International Conference on Automated Planning and Scheduling, 31(1), 560-568. https://doi.org/10.1609/icaps.v31i1.16004
Section: Special Track on Planning and Learning