A Proposal for a Language Model Based Cognitive Architecture

Authors

  • Kobe Knowles, University of Auckland
  • Michael Witbrock, University of Auckland
  • Gillian Dobbie, University of Auckland
  • Vithya Yogarajan, University of Auckland

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27691

Keywords:

Cognitive Architectures, Language Models, Transformers, Fast and Slow Thinking

Abstract

Large Language Models (LLMs) have shown impressive performance on a wide variety of tasks. However, apparent limitations hinder their performance, especially on tasks that require multiple steps of reasoning or compositionality. Arguably, the primary sources of these limitations are the decoding strategy and how the models are trained. We propose, and provide a general description of, an architecture that combines LLMs and cognitive architectures, called Language Model based Cognitive Architecture (LMCA), to overcome these limitations. We draw an analogy between this architecture and "fast" and "slow" thinking in human cognition.
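The fast/slow analogy in the abstract can be pictured as a control loop: a single LLM decoding pass plays the role of "fast" thinking, and a deliberate, multi-step controller is invoked when the fast pass is unreliable. The sketch below is purely illustrative; the names `fast_llm`, `slow_controller`, and the confidence threshold are assumptions introduced here, not the paper's interface, which is given only as a general description.

```python
# A minimal, hypothetical sketch of the "fast"/"slow" split described in the
# abstract. All names and the confidence-threshold mechanism are assumptions
# for illustration; the paper does not prescribe this interface.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Answer:
    text: str
    confidence: float  # self-estimated confidence in [0, 1] (assumed here)


def lmca_respond(
    prompt: str,
    fast_llm: Callable[[str], Answer],                 # single-pass decode ("fast" thinking)
    slow_controller: Callable[[str, Answer], Answer],  # deliberate multi-step loop ("slow" thinking)
    threshold: float = 0.9,
) -> Answer:
    """Answer directly when the fast pass is confident; otherwise hand the
    problem to a deliberate controller that can decompose it into steps."""
    draft = fast_llm(prompt)  # System-1-like: one forward pass
    if draft.confidence >= threshold:
        return draft
    return slow_controller(prompt, draft)  # System-2-like: iterate, verify, revise
```

In this framing, the threshold governs how often the slower, costlier deliberation path is taken; the paper's actual mechanism for coordinating the two modes is part of its general architecture description and may differ.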

Published

2024-01-22

Section

Integration of Cognitive Architectures and Generative Models