From Virtual Demonstration to Real-World Manipulation Using LSTM and MDN

Authors

  • Rouhollah Rahmatizadeh, University of Central Florida
  • Pooya Abolghasemi, University of Central Florida
  • Aman Behal, University of Central Florida
  • Ladislau Bölöni, University of Central Florida

DOI:

https://doi.org/10.1609/aaai.v32i1.12099

Keywords:

Robot learning, Learning from demonstration, Deep neural networks

Abstract

Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach that would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time-consuming, disruptive to the comfort of the user, and presents safety challenges. It would be desirable to perform the demonstrations in a virtual environment instead. In this paper we describe a solution to the challenging problem of transferring behavior from virtual demonstrations to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suited to the multimodal nature of demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architectural choice outperforms alternatives such as feedforward networks and mean-squared-error based training signals, and (3) including imperfect demonstrations in the training set allows the controller to learn how to correct its manipulation mistakes.
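
To make the LSTM+MDN combination concrete, below is a minimal PyTorch-style sketch of an LSTM trajectory generator trained with an MDN negative log-likelihood loss, in the spirit of the controller described in the abstract. The layer sizes, the diagonal-Gaussian mixture, and all names (LSTMMDN, mdn_nll) are illustrative assumptions, not details drawn from the paper.

    # Sketch, not the authors' code: an LSTM whose output head parameterizes
    # a mixture of diagonal Gaussians over the next trajectory waypoint.
    import torch
    import torch.nn as nn

    class LSTMMDN(nn.Module):
        def __init__(self, in_dim=10, hidden=50, out_dim=7, n_mix=5):
            super().__init__()
            self.out_dim, self.n_mix = out_dim, n_mix
            self.lstm = nn.LSTM(in_dim, hidden, num_layers=3, batch_first=True)
            # Per mixture component: one mixing logit, a mean vector,
            # and a log standard deviation vector.
            self.head = nn.Linear(hidden, n_mix * (1 + 2 * out_dim))

        def forward(self, x, state=None):
            h, state = self.lstm(x, state)            # (B, T, hidden)
            p = self.head(h)                          # (B, T, n_mix*(1+2*out_dim))
            logits = p[..., :self.n_mix]              # mixing coefficients (logits)
            means, log_std = p[..., self.n_mix:].chunk(2, dim=-1)
            means = means.view(*p.shape[:-1], self.n_mix, self.out_dim)
            log_std = log_std.view(*p.shape[:-1], self.n_mix, self.out_dim)
            return logits, means, log_std, state

    def mdn_nll(logits, means, log_std, target):
        # Negative log-likelihood of target waypoints under the mixture.
        target = target.unsqueeze(-2)                 # broadcast over components
        comp = torch.distributions.Normal(means, log_std.exp())
        log_prob = comp.log_prob(target).sum(-1)      # (B, T, n_mix)
        log_mix = torch.log_softmax(logits, dim=-1)
        return -torch.logsumexp(log_mix + log_prob, dim=-1).mean()

At inference time one would typically select a mixture component according to the softmax of the logits, then sample (or take the mean of) the corresponding Gaussian to produce the next waypoint, feeding it back as input to the LSTM; the mixture is what lets the loss represent multiple valid demonstrated trajectories rather than averaging them, which is where a mean-squared-error signal falls short.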

Published

2018-04-26

How to Cite

Rahmatizadeh, R., Abolghasemi, P., Behal, A., & Bölöni, L. (2018). From Virtual Demonstration to Real-World Manipulation Using LSTM and MDN. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12099