Complex Task Learning from Unstructured Demonstrations

Authors

  • Scott Niekum, University of Massachusetts Amherst

DOI:

https://doi.org/10.1609/aaai.v26i1.8182

Abstract

Much work in learning from demonstration (LfD) has focused on learning simple tasks from structured demonstrations that have a well-defined beginning and end. As we attempt to scale robot learning to increasingly complex tasks, it becomes intractable to learn task policies monolithically. Furthermore, it is desirable to learn from natural, unstructured demonstrations, which may be unsegmented, incomplete, and drawn from several different tasks. We propose a three-part approach to designing a natural, scalable system that allows a robot to learn tasks of increasing complexity by automatically building and refining a library of skills over time. First, we describe a Bayesian nonparametric model that can segment unstructured demonstrations into an appropriate number of component skills and recognize repeated skills across demonstrations and tasks. These skills can then be parameterized and generalized to new situations. Second, we propose a system that allows the user to provide unstructured corrections and feedback to the robot without requiring any knowledge of the robot's underlying representation of the task or its component skills. Third, we propose to infer the user's intentions for each segmented skill and to improve these skills autonomously using reinforcement learning. This approach will be applied to learn and generalize complex, multi-step tasks that are beyond the reach of current LfD methods, using the PR2 mobile manipulator as a testing platform.
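
To make the first component concrete, the sketch below is a minimal, illustrative stand-in for the kind of Bayesian nonparametric inference the abstract describes, not the model actually proposed in the paper. It clusters hypothetical per-segment skill features with a Dirichlet-process (Chinese restaurant process) mixture of Gaussians, fit by collapsed Gibbs sampling, so the number of distinct skills is inferred from the data rather than fixed in advance, and repeated skills across demonstrations land in the same cluster. All data, hyperparameters, and names are assumptions made for the example.

```python
# A minimal sketch, not the model from the paper: a Dirichlet-process
# (CRP) mixture of 1-D Gaussians, fit by collapsed Gibbs sampling.
# Per-segment "skill features" and all hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features of already-segmented skills, pooled across
# several unstructured demonstrations (three true underlying skills).
data = np.concatenate([
    rng.normal(-4.0, 0.5, 20),   # segments of skill A
    rng.normal( 0.0, 0.5, 20),   # segments of skill B
    rng.normal( 5.0, 0.5, 20),   # segments of skill C
])

ALPHA = 1.0              # CRP concentration: propensity to create new skills
SIGMA2 = 0.25            # assumed known within-skill feature variance
MU0, TAU2 = 0.0, 25.0    # Gaussian prior on each skill's mean feature

def predictive_logpdf(x, n, s):
    """Log posterior-predictive density of x under a cluster that
    currently holds n points summing to s (conjugate Gaussian model)."""
    post_prec = 1.0 / TAU2 + n / SIGMA2
    post_mean = (MU0 / TAU2 + s / SIGMA2) / post_prec
    var = 1.0 / post_prec + SIGMA2
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - post_mean) ** 2 / var)

# Start with every segment assigned to a single skill cluster.
z = np.zeros(len(data), dtype=int)
counts, sums = {0: len(data)}, {0: float(data.sum())}

for sweep in range(50):
    for i, x in enumerate(data):
        # Remove segment i from its current cluster.
        k = z[i]
        counts[k] -= 1
        sums[k] -= x
        if counts[k] == 0:
            del counts[k], sums[k]
        # Score each existing skill and a brand-new one (CRP prior
        # times posterior-predictive likelihood), then resample.
        ks = list(counts)
        logp = [np.log(counts[j]) + predictive_logpdf(x, counts[j], sums[j])
                for j in ks]
        logp.append(np.log(ALPHA) + predictive_logpdf(x, 0, 0.0))
        logp = np.asarray(logp)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        choice = rng.choice(len(ks) + 1, p=p)
        k_new = ks[choice] if choice < len(ks) else max(counts) + 1
        z[i] = k_new
        counts[k_new] = counts.get(k_new, 0) + 1
        sums[k_new] = sums.get(k_new, 0.0) + x

print("inferred number of skills:", len(counts))   # typically 3
```

The concentration parameter ALPHA trades off reusing an existing skill against instantiating a new one, which is what lets the skill library grow as more complex tasks are demonstrated. The actual proposal additionally has to segment continuous demonstrations in time; this sketch sidesteps that by assuming segment features are given.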

Published

2021-09-20

How to Cite

Niekum, S. (2021). Complex Task Learning from Unstructured Demonstrations. Proceedings of the AAAI Conference on Artificial Intelligence, 26(1), 2402-2403. https://doi.org/10.1609/aaai.v26i1.8182