Exploiting Language Models as a Source of Knowledge for Cognitive Agents

Authors

  • James R. Kirk, Center for Integrated Cognition
  • Robert E. Wray, Center for Integrated Cognition
  • John E. Laird, Center for Integrated Cognition

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27690

Keywords:

Large Language Models, Cognitive Architecture, Task Learning, Knowledge Extraction

Abstract

Large language models (LLMs) provide capabilities far beyond sentence completion, including question answering, summarization, and natural-language inference. While many of these capabilities have potential applications to cognitive systems, our research exploits language models as a source of task knowledge for cognitive agents, that is, agents realized via a cognitive architecture. We identify challenges and opportunities for using language models as an external knowledge source for cognitive systems, as well as possible ways to improve the effectiveness of knowledge extraction by integrating extraction with cognitive architecture capabilities, illustrating these points with examples from our recent work in this area.

Published

2024-01-22

Section

Integration of Cognitive Architectures and Generative Models