Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions

Authors

  • Cynthia Matuszek, University of Washington
  • Liefeng Bo, Amazon
  • Luke Zettlemoyer, University of Washington
  • Dieter Fox, University of Washington

DOI

https://doi.org/10.1609/aaai.v28i1.9051

Keywords

Gesture, Natural Language, Human-Robot Interaction

Abstract

As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to state-of-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively captures user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands while continuing to learn from user input.
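
As one illustration of the kind of multimodal grounding the abstract describes, the sketch below combines per-object scores from a separate language model and gesture model into a single ranking over candidate referents. This is a minimal, assumption-laden example: the class names, scores, and weighted-product fusion rule are illustrative and not the method used in the paper.

    # Minimal sketch (not the authors' implementation): fuse per-object scores
    # from a language model and a deictic-gesture model to rank candidate
    # referents. All names and the fusion rule are assumptions.
    from dataclasses import dataclass


    @dataclass
    class Candidate:
        """A segmented object hypothesis with a score from each modality."""
        object_id: str
        language_score: float   # how well the spoken description fits the object
        gesture_score: float    # how strongly the pointing gesture indicates it


    def fuse(candidates, language_weight=0.5):
        """Rank candidates by a weighted product of the two modality scores."""
        def fused(c):
            w = language_weight
            return (c.language_score ** w) * (c.gesture_score ** (1.0 - w))
        return sorted(candidates, key=fused, reverse=True)


    if __name__ == "__main__":
        hypotheses = [
            Candidate("red_mug", language_score=0.70, gesture_score=0.40),
            Candidate("blue_box", language_score=0.20, gesture_score=0.85),
            Candidate("green_cup", language_score=0.55, gesture_score=0.60),
        ]
        best = fuse(hypotheses)[0]
        print(f"Most likely referent: {best.object_id}")

In this toy example the fused ranking prefers the object that is reasonably supported by both modalities over one that is strongly supported by only a single modality, which is the intuition behind combining gesture and language rather than relying on either alone.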

Published

2014-06-21

How to Cite

Matuszek, C., Bo, L., Zettlemoyer, L., & Fox, D. (2014). Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9051