Using Large Language Models in the Companion Cognitive Architecture: A Case Study and Future Prospects
DOI: https://doi.org/10.1609/aaaiss.v2i1.27700
Keywords: Cognitive Architectures, Natural Language Understanding, Large Language Models
Abstract
The goal of the Companion cognitive architecture is to understand how to create human-like software social organisms. Thus, natural language capabilities, both for reading and conversation, are essential. Recently, we have begun experimenting with large language models as a component in the Companion architecture. This paper summarizes a case study indicating why we are currently using BERT with our symbolic natural language understanding system. It also describes some additional ways we are contemplating using large language models with Companions.
Published: 2024-01-22
How to Cite
Nakos, C., & Forbus, K. D. (2024). Using Large Language Models in the Companion Cognitive Architecture: A Case Study and Future Prospects. Proceedings of the AAAI Symposium Series, 2(1), 356-359. https://doi.org/10.1609/aaaiss.v2i1.27700
Section: Integration of Cognitive Architectures and Generative Models