Learning to Manipulate Unknown Objects in Clutter by Reinforcement

Authors

  • Abdeslam Boularias, Carnegie Mellon University
  • James Bagnell, Carnegie Mellon University
  • Anthony Stentz, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v29i1.9378

Keywords:

Robotics, Reinforcement Learning, Grasping, Manipulation

Abstract

We present a fully autonomous robotic system for grasping objects in dense clutter. The objects are unknown and have arbitrary shapes, so we cannot rely on prior models. Instead, the robot learns online, from scratch, to manipulate the objects by trial and error. Grasping objects in clutter is significantly harder than grasping isolated objects, because the robot needs to push and move objects around in order to create sufficient space for the fingers. These pre-grasping actions have no immediate utility and may result in unnecessary delays. The utility of a pre-grasping action can be measured only by looking at the complete chain of consecutive actions and effects. This is a sequential decision-making problem that can be cast in the reinforcement learning framework. We solve this problem by learning the stochastic transitions between the observed states, using nonparametric density estimation. The learned transition function is used only for re-calculating the values of the executed actions in the observed states, under different policies. Values of new state-action pairs are obtained by regressing the values of the executed actions. The state of the system at a given time is a depth (3D) image of the scene. We use spectral clustering for detecting the different objects in the image. The performance of our system is assessed on a robot with real-world objects.
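The abstract mentions spectral clustering for detecting objects in the depth image. The following is a minimal sketch of that general idea, not the paper's implementation: the synthetic 3D points, the RBF affinity, the kernel width, and the two-object case are all illustrative assumptions. It bipartitions a point set by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the normalized graph Laplacian).

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Split a set of 3D points into two clusters using the sign of the
    Fiedler vector of the normalized graph Laplacian (illustrative sketch)."""
    # Pairwise squared distances and RBF (Gaussian) affinity matrix.
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(points)) - (d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :])
    # Eigenvectors sorted by ascending eigenvalue; column 1 is the Fiedler vector.
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]
    return (fiedler > 0).astype(int)

# Two synthetic "objects": tight 3D point clusters, stand-ins for segmented
# depth-image points (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
obj_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.2, size=(20, 3))
obj_b = rng.normal(loc=[3.0, 3.0, 3.0], scale=0.2, size=(20, 3))
labels = spectral_bipartition(np.vstack([obj_a, obj_b]), sigma=1.0)
```

For more than two objects, one would instead embed the points with the first k eigenvectors and run k-means on the embedding, which is the standard multi-way extension of this two-way cut.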

Published

2015-02-16

How to Cite

Boularias, A., Bagnell, J., & Stentz, A. (2015). Learning to Manipulate Unknown Objects in Clutter by Reinforcement. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9378