Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis


  • Oscar J. Romero Carnegie Mellon University
  • John Zimmerman Carnegie Mellon University
  • Aaron Steinfeld Carnegie Mellon University
  • Anthony Tomasic Carnegie Mellon University



Keywords: Large Language Models, Cognitive AI, Common Model of Cognition, Neuro-symbolic Systems, Multi-agent Systems, ACT-R Cognitive Architecture, CLARION Cognitive Architecture, Generative AI, Simulation Theory of Cognition


This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach introduces four models with varying degrees of integration; it makes use of chain-of-thought prompting and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, inspired by the CLARION cognitive architecture, proposes a model in which bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance uses symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the trade-offs and challenges associated with each approach.
