TY - JOUR
AU - Alomari, Muhannad
AU - Duckworth, Paul
AU - Hogg, David
AU - Cohn, Anthony
PY - 2017/02/12
Y2 - 2024/03/29
TI - Natural Language Acquisition and Grounding for Embodied Robotic Systems
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 31
IS - 1
SE - Special Track on Cognitive Systems
DO - 10.1609/aaai.v31i1.11161
UR - https://ojs.aaai.org/index.php/AAAI/article/view/11161
SP -
AB - We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.
ER -