Comparing LLMs for Prompt-Enhanced ACT-R and Soar Model Development: A Case Study in Cognitive Simulation

Authors

  • Siyu Wu, College of Information Sciences and Technology, The Pennsylvania State University, University Park
  • Rodrigo F. Souza, Federal University of São Paulo
  • Frank E. Ritter, College of Information Sciences and Technology, The Pennsylvania State University, University Park
  • Walter T. Lima Jr., Federal University of São Paulo

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27710

Keywords:

Computational Cognitive Modeling, ACT-R, Soar, LLMs

Abstract

This paper presents experiments on using ChatGPT-4 and Google Bard to create ACT-R and Soar models. The study involves two simulated cognitive tasks, in which ChatGPT-4 and Google Bard (large language models, LLMs) serve as conversational interfaces within the ACT-R and Soar development environments. The first task involves creating an intelligent driving model in ACT-R that includes motor and perceptual behavior and can interact with an unmodified interface. The second task evaluates the development of a model of educational skills in Soar. Prompts were designed to represent cognitive operations and actions, including providing context, asking perception-related questions, presenting decision-making scenarios, and evaluating the system's responses; the prompts were iteratively refined based on evaluations of model behavior. The results demonstrate the potential of LLMs to serve as interactive interfaces for developing ACT-R and Soar models within a human-in-the-loop model development process. We document the mistakes the LLMs made during this integration and provide corresponding resolutions for adopting this modeling approach. Furthermore, we present a framework of prompt patterns that maximizes the effectiveness of LLM interaction with cognitive architectures.
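To make the prompt-pattern framework described above concrete, the sketch below shows one way such prompts might be assembled and iteratively refined in a human-in-the-loop loop. This is a minimal illustration under stated assumptions, not the paper's implementation: the PromptPattern fields, the query_llm stub, and the looks_like_actr_production check are all hypothetical names introduced here.

    # Hypothetical sketch of a prompt-pattern refinement loop for LLM-assisted
    # ACT-R model development. Names and checks are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class PromptPattern:
        """One prompt in the framework: context, a perception-related
        question, a decision-making scenario, and an evaluation request."""
        context: str
        perception: str
        decision: str
        evaluation: str

        def render(self) -> str:
            # Assemble the four pattern components into a single prompt.
            return "\n".join([
                f"Context: {self.context}",
                f"Perception: {self.perception}",
                f"Decision: {self.decision}",
                f"Evaluation: {self.evaluation}",
            ])

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to ChatGPT-4 or Bard; stubbed so the
        sketch runs offline."""
        return "(p drive-straight =goal> ISA drive state steer ==> ...)"

    def looks_like_actr_production(text: str) -> bool:
        """Crude human-in-the-loop check: does the output resemble an
        ACT-R production, i.e. (p name conditions ==> actions)?"""
        return "(p " in text and "==>" in text

    def develop_model(pattern: PromptPattern, max_iterations: int = 3) -> str:
        """Iteratively refine the prompt until the LLM output passes the
        check or the iteration budget is exhausted."""
        prompt = pattern.render()
        output = ""
        for i in range(max_iterations):
            output = query_llm(prompt)
            if looks_like_actr_production(output):
                return output
            # Refine: append feedback from the failed evaluation.
            prompt += (f"\nFeedback (iteration {i + 1}): output was not a "
                       "valid ACT-R production; use (p name conditions ==> "
                       "actions) syntax.")
        return output

    if __name__ == "__main__":
        driving = PromptPattern(
            context="You are writing ACT-R productions for a driving model.",
            perception="The visual module sees the lane center drifting left.",
            decision="Write a production issuing a corrective steering motor command.",
            evaluation="Check that the production uses valid ACT-R buffer syntax.",
        )
        print(develop_model(driving))

In this sketch the evaluation step is a simple syntactic check, standing in for the human modeler's review of the generated productions; the same loop structure would apply to Soar rules with a different check.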

Published

2024-01-22

Section

Integration of Cognitive Architectures and Generative Models