Modeling Human-Like Acquisition of Language and Concepts

Authors

  • Peter Lindes, IQMRI Center for Integrated Cognition
  • Steven Jones, IQMRI Center for Integrated Cognition

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31275

Keywords:

Human-like Learning, Language Acquisition, Cognitive Modeling

Abstract

Humans acquire language and related concepts along a trajectory over a lifetime. Concepts for simple interaction with the world are learned before language. Later, words are learned to name these concepts, along with structures needed to represent larger meanings. Eventually, language advances to where it can drive the learning of new concepts. Throughout this trajectory, a language processing capability uses architectural mechanisms to comprehend language with the knowledge already acquired. We assume that this growing body of knowledge is made up of small units of form-meaning mapping that can be composed in many ways, suggesting that these units are learned incrementally from experience. In prior work we have built a system to comprehend human language within an autonomous robot, using knowledge in such units developed by hand. Here we propose a research program to develop the ability of an artificial agent to acquire this knowledge incrementally and autonomously from its experience along a similar trajectory. We then propose a strategy for evaluating this human-like learning system using a large benchmark created as a tool for training deep learning systems. We expect that our human-like learning system will produce better task performance from training on only a small subset of this benchmark.

Published

2024-05-20

Section

Symposium on Human-Like Learning