Using Large Language Models in the Companion Cognitive Architecture: A Case Study and Future Prospects
Keywords: Cognitive Architectures, Natural Language Understanding, Large Language Models
Abstract
The goal of the Companion cognitive architecture is to understand how to create human-like software social organisms. Thus, natural language capabilities, both for reading and conversation, are essential. Recently we have begun experimenting with large language models as a component of the Companion architecture. This paper summarizes a case study indicating why we currently use BERT with our symbolic natural language understanding system. It also describes additional ways we are contemplating using large language models with Companions.
Integration of Cognitive Architectures and Generative Models