Bootstrapping Cognitive Agents with a Large Language Model
DOI:
https://doi.org/10.1609/aaai.v38i1.27822
Keywords:
CMS: Agent Architectures, CMS: (Computational) Cognitive Architectures, ROB: Cognitive Robotics
Abstract
Large language models contain noisy general knowledge of the world, yet are hard to train or fine-tune. In contrast, cognitive architectures have excellent interpretability and are flexible to update, but require substantial manual work to instantiate. In this work, we combine the best of both worlds: bootstrapping a cognitive-based model with the noisy knowledge encoded in large language models. Through an embodied agent performing kitchen tasks, we show that our proposed framework yields better efficiency than an agent based entirely on large language models. Our experiments also indicate that the cognitive agent bootstrapped using this framework can generalize to novel environments and scale to complex tasks.
Published
2024-03-25
How to Cite
Zhu, F., & Simmons, R. (2024). Bootstrapping Cognitive Agents with a Large Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 655-663. https://doi.org/10.1609/aaai.v38i1.27822
Issue
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems