Natural Language Acquisition and Grounding for Embodied Robotic Systems

Authors

  • Muhannad Alomari, University of Leeds
  • Paul Duckworth, University of Leeds
  • David Hogg, University of Leeds
  • Anthony Cohn, University of Leeds

DOI:

https://doi.org/10.1609/aaai.v31i1.11161

Keywords:

cognitive robotics, language and vision, bootstrap problem

Abstract

We present a novel, cognitively plausible framework capable of learning both the grounding in visual semantics and the grammar of natural language commands given to a robot in a tabletop environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the actions. No prior knowledge is assumed about the meanings of words or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to the linguistic input. A novel relational graph representation is used to build connections between language and vision. In addition to grounding language in perception, the system induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.
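To make the grounding idea concrete, the sketch below shows one highly simplified reading of the pipeline: continuous perceptual features are clustered into discrete concepts, and words from paired commands are then linked to concepts by co-occurrence. This is an illustrative assumption only; the toy colour features, the example commands, and the simple co-occurrence counting are all hypothetical, and the paper itself uses a relational graph representation rather than this word-concept counting scheme.

```python
# Hypothetical sketch (not the authors' code): cluster a continuous perceptual
# space into concepts, then link words from paired commands to those concepts
# by co-occurrence. Toy data and method are assumptions for illustration.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Toy perceptual features: one RGB colour vector per observed object.
features = np.array([
    [0.9, 0.1, 0.1],   # reddish
    [0.8, 0.2, 0.1],   # reddish
    [0.1, 0.1, 0.9],   # bluish
    [0.2, 0.1, 0.8],   # bluish
])

# The natural language command paired with each observation.
commands = [
    "pick up the red block",
    "move the red block",
    "pick up the blue block",
    "push the blue block",
]

# 1. Cluster the continuous perceptual space into discrete concepts.
concepts = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# 2. Count word/concept co-occurrences across the paired examples.
counts = defaultdict(lambda: defaultdict(int))
for concept, command in zip(concepts, commands):
    for word in command.split():
        counts[word][concept] += 1

# 3. Ground each word to its most frequent concept, reported as P(concept | word).
for word, by_concept in counts.items():
    total = sum(by_concept.values())
    best = max(by_concept, key=by_concept.get)
    print(f"{word!r} -> concept {best} (p={by_concept[best] / total:.2f})")
```

Under this toy setup, colour words such as "red" and "blue" end up associated with their respective perceptual clusters, while function words spread evenly across concepts; the paper's relational graph and grammar induction handle the actions and spatial relations that this sketch omits.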

Published

2017-02-12

How to Cite

Alomari, M., Duckworth, P., Hogg, D., & Cohn, A. (2017). Natural Language Acquisition and Grounding for Embodied Robotic Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11161