Invariant Representation Learning for Memory Behavior Modeling via Adaptive Environment Separation
DOI: https://doi.org/10.1609/aaai.v40i3.37186

Abstract
Memory behavior modeling seeks to predict individual recall performance and to understand its underlying cognitive mechanisms. However, the dynamic and heterogeneous nature of memory data poses significant challenges to a model's ability to generalize under unseen conditions. To address this challenge, we propose I-Mem, an invariant representation learning framework that integrates self-supervised contrastive learning with decorrelation constraints. The framework adaptively identifies and suppresses environment-related factors in sequential behavioral data, mitigating the influence of spurious features and strengthening the modeling of stable cognitive structures. Importantly, the method does not rely on explicit environment partitioning or predefined environment labels, and our theoretical analysis demonstrates that it can resist environmental perturbations and extract invariant structural representations, ensuring adaptability and generalization. Empirical evaluations on both synthetic and real-world datasets further confirm its superiority over mainstream methods in generalization performance and stable feature identification. Feature attribution analysis reveals that I-Mem extracts invariant features aligned with classical cognitive effects, and that it reflects short-term behavioral patterns that may indicate latent cognitive mechanisms beyond existing theories, highlighting both its interpretability and its discovery potential.

Published
2026-03-14
How to Cite
Shen, X., Hu, Z., Li, F., Liu, S., & Sun, J. (2026). Invariant Representation Learning for Memory Behavior Modeling via Adaptive Environment Separation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(3), 2047–2055. https://doi.org/10.1609/aaai.v40i3.37186
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems
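For concreteness, the decorrelation constraint mentioned in the abstract can be sketched in one common form: penalizing the off-diagonal entries of the feature covariance matrix so that individual representation dimensions carry independent information. This is a minimal numpy illustration of that general idea only; the function name and the specific loss form are illustrative assumptions, and I-Mem's actual objective is not reproduced here.

```python
import numpy as np

def decorrelation_penalty(z):
    """Illustrative decorrelation loss (not I-Mem's exact objective).

    z: array of shape (batch, dim) holding learned representations.
    Returns the sum of squared off-diagonal entries of the sample
    covariance matrix; minimizing it pushes feature dimensions toward
    pairwise decorrelation.
    """
    z = z - z.mean(axis=0, keepdims=True)      # center each dimension
    cov = (z.T @ z) / (len(z) - 1)             # sample covariance matrix
    off_diag = cov - np.diag(np.diag(cov))     # keep only off-diagonal terms
    return float((off_diag ** 2).sum())

# Already-decorrelated features incur zero penalty:
z_indep = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
print(decorrelation_penalty(z_indep))  # → 0.0
```

In a training loop, a term like this would typically be added to a self-supervised contrastive loss with a weighting coefficient, so that the encoder is simultaneously pushed toward discriminative and decorrelated representations.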