Deictic Image Mapping: An Abstraction for Learning Pose Invariant Manipulation Policies

Authors

  • Robert Platt, Northeastern University
  • Colin Kohler, Northeastern University
  • Marcus Gualtieri, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018042

Abstract

In applications of deep reinforcement learning to robotics, it is often the case that we want to learn pose invariant policies: policies that are invariant to changes in the position and orientation of objects in the world. For example, consider a peg-in-hole insertion task. If the agent learns to insert a peg into one hole, we would like that policy to generalize to holes presented in different poses. Unfortunately, this is challenging with conventional methods. This paper proposes a novel state and action abstraction, called deictic image maps, that is invariant to pose shifts and can be used with deep reinforcement learning. We provide broad conditions under which optimal abstract policies are optimal for the underlying system. Finally, we show that the method can help solve challenging robotic manipulation problems.
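The sketch below is not the authors' code; it only illustrates the core idea described in the abstract, assuming a top-down scene image and candidate action poses parameterized as (x, y, theta). The hypothetical `deictic_patch` function crops a fixed-size patch centered and oriented at the action pose, so the patch an agent sees is unchanged when the object (and the corresponding action pose) is translated or rotated in the world.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def deictic_patch(image, x, y, theta, size=32):
    """Sample a size x size patch centered at (x, y) and rotated by theta (radians)."""
    half = size / 2.0
    # Patch-local grid, centered at the origin.
    us, vs = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
    # Rotate the grid into the image frame and translate it to the action pose.
    c, s = np.cos(theta), np.sin(theta)
    rows = y + (-s * us + c * vs)
    cols = x + (c * us + s * vs)
    # Bilinear sampling of the scene at the transformed coordinates.
    return map_coordinates(image, [rows, cols], order=1, mode='constant')


# Example: the same patch is recovered after the scene shifts, because the
# crop is always expressed relative to the action pose rather than the world.
scene = np.zeros((128, 128))
scene[60:70, 60:70] = 1.0                        # a toy object
p1 = deictic_patch(scene, x=65, y=65, theta=0.0)
shifted = np.roll(scene, (20, 10), axis=(0, 1))  # move the object
p2 = deictic_patch(shifted, x=75, y=85, theta=0.0)
assert np.allclose(p1, p2)
```

A Q-function trained on such pose-relative patches only ever sees the scene in the action's own frame, which is one way to realize the pose invariance the abstract argues for.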

Published

2019-07-17

How to Cite

Platt, R., Kohler, C., & Gualtieri, M. (2019). Deictic Image Mapping: An Abstraction for Learning Pose Invariant Manipulation Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8042-8049. https://doi.org/10.1609/aaai.v33i01.33018042

Section

AAAI Technical Track: Robotics