ARTEM: Enhancing Large Language Model Agents with Spatial-Temporal Episodic Memory
DOI:
https://doi.org/10.1609/aaai.v40i30.39773
Abstract
Current large language models (LLMs) exhibit significant deficiencies in episodic memory tasks, including encoding, storing, and retrieving specific information from temporally dependent events over long periods of time. Recent approaches to handling memory in LLMs, such as in-context learning, retrieval-augmented generation (RAG), and fine-tuning, may mitigate long-term retention issues but remain inadequate for tasks requiring chronological awareness of the stored information. We introduce Agentic Retrieval with Temporal-Episodic Memory (ARTEM), a hybrid agent architecture integrating LLMs with a self-organizing neural network named Spatial-Temporal Episodic Memory (STEM), designed to handle episodic memory tasks. Beyond generating outputs or direct responses, our approach employs LLMs to extract events from the inputs, representing temporal, spatial, entitative, and semantic information that may facilitate future retrieval. The extracted events are then encoded as vectors and stored quickly and stably in the episodic memory through instance-based incremental learning in STEM. STEM supports precise episode retrieval and helps reduce the computational overhead of generating appropriate responses with LLMs. Evaluation on standardized episodic memory benchmarks across four tasks—partial cue retrieval, epistemic uncertainty detection, recent event identification, and chronological recall—demonstrates the superior performance of ARTEM compared to in-context learning, RAG, and fine-tuning across various popular LLMs.
Published
2026-03-14
How to Cite
Tan, C. H.-M., Subagdja, B., & Tan, A.-H. (2026). ARTEM: Enhancing Large Language Model Agents with Spatial-Temporal Episodic Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25753–25760. https://doi.org/10.1609/aaai.v40i30.39773
Issue
Section
AAAI Technical Track on Machine Learning VII
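The abstract above describes a pipeline in which an LLM extracts structured events (temporal, spatial, entitative, semantic), which are vector-encoded and stored incrementally for later partial-cue and chronological retrieval. The following is a minimal, hypothetical Python sketch of that idea — every name here (`Event`, `EpisodicStore`, `embed`) is our own illustration, not the paper's API: hand-written records stand in for LLM-extracted events, a bag-of-words vector stands in for a learned encoding, and a flat list stands in for STEM's self-organizing network.

```python
from collections import Counter
from dataclasses import dataclass
import math

@dataclass
class Event:
    """Illustrative stand-in for an LLM-extracted episodic event."""
    time: int        # temporal attribute (discrete timestamp)
    location: str    # spatial attribute
    entities: tuple  # entitative attribute
    summary: str     # semantic gist

def embed(text: str) -> dict:
    """Toy sparse bag-of-words encoding, unit-normalized (a stand-in
    for a real vector encoder)."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {tok: v / norm for tok, v in counts.items()}

def cosine(a: dict, b: dict) -> float:
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

class EpisodicStore:
    """Instance-based incremental memory: each event is stored as-is
    with its encoding; no gradient updates are required."""
    def __init__(self):
        self.events = []
        self.codes = []

    def add(self, event: Event) -> None:
        key = f"{event.location} {' '.join(event.entities)} {event.summary}"
        self.events.append(event)
        self.codes.append(embed(key))

    def retrieve(self, cue: str, k: int = 1) -> list:
        """Partial-cue retrieval: rank stored events by similarity."""
        q = embed(cue)
        ranked = sorted(zip(self.events, self.codes),
                        key=lambda ec: -cosine(q, ec[1]))
        return [e for e, _ in ranked[:k]]

    def chronological(self) -> list:
        """Chronological recall: replay episodes in temporal order."""
        return sorted(self.events, key=lambda e: e.time)

store = EpisodicStore()
store.add(Event(2, "kitchen", ("alice",), "alice cooked pasta"))
store.add(Event(1, "office", ("bob",), "bob wrote a report"))
print(store.retrieve("pasta kitchen")[0].summary)   # → alice cooked pasta
print([e.time for e in store.chronological()])      # → [1, 2]
```

In the paper's actual architecture the encoder and memory are a trained self-organizing network rather than this exact-match toy, but the sketch shows why instance-based storage supports both cue-driven lookup and ordered replay from the same store.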