Using Large Language Models in the Companion Cognitive Architecture: A Case Study and Future Prospects

Authors

  • Constantine Nakos, Northwestern University
  • Kenneth D. Forbus, Northwestern University

DOI

https://doi.org/10.1609/aaaiss.v2i1.27700

Keywords

Cognitive Architectures, Natural Language Understanding, Large Language Models

Abstract

The goal of the Companion cognitive architecture is to understand how to create human-like software social organisms. Thus, natural language capabilities, both for reading and conversation, are essential. Recently, we have begun experimenting with large language models as a component in the Companion architecture. This paper summarizes a case study indicating why we are currently using BERT with our symbolic natural language understanding system. It also describes some additional ways we are contemplating using large language models with Companions.

Published

2024-01-22

Issue

Proceedings of the AAAI Symposium Series, Vol. 2 No. 1

Section

Integration of Cognitive Architectures and Generative Models