Do Large Language Models Learn to Human-Like Learn?

Authors

  • Jesse Roberts Vanderbilt University

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31287

Keywords:

Large Language Model, Human-Like Learning, Language Model Behavior, Transformers, GPT

Abstract

Human-like learning refers to the learning done within the lifetime of an individual. However, the architecture of the human brain has been developed over millennia and represents a long process of evolutionary learning, which could be viewed as a form of pre-training. Large language models (LLMs), after pre-training on large amounts of data, exhibit a form of learning referred to as in-context learning (ICL). Consistent with human-like learning, LLMs are able to use ICL to perform novel tasks from few examples and to interpret those examples through the lens of their prior experience. I examine the constraints which typify human-like learning and propose that LLMs may learn to exhibit human-like learning simply by training on human-generated text.

Published

2024-05-20

Section

Symposium on Human-Like Learning