State Mamba: Spatiotemporal EEG State-Space Model with Dynamic Brain Alignment for Cross-Subject Representation

Authors

  • Weining Weng — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yang Gu — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yuan Ma — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yuchen Liu — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yingwei Zhang — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yiqiang Chen — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i21.38843

Abstract

Cross-subject EEG decoding remains a fundamental challenge due to substantial inter-subject variability in brain activity, which hinders the development of subject-independent EEG models. Despite progress in extracting cross-subject invariant features, existing studies neglect the shared neural responses that arise under similar cognitive or emotional states across individuals, limiting their ability to learn generalized and consistent EEG representations. To address these challenges, we propose State Mamba, a novel spatiotemporal EEG state-space model that explicitly models and aligns neural responses and their spatiotemporal state transitions to learn consistent and generalizable representations across subjects. State Mamba formulates a multi-channel Mamba architecture that jointly models spatial and temporal brain state transitions, supporting principled analysis of neural responses. To enhance spatiotemporal feature coupling, we introduce the LGANN module, which adopts global-local attention to integrate long- and short-term brain activity into a compact EEG representation. Furthermore, we design two self-supervised pretext tasks to extract consistent neural patterns across subjects: (1) representation alignment, which aligns EEG representations across subjects, and (2) pattern alignment, which aligns their transition rules under identical conditions, jointly promoting subject-invariant EEG representations. Extensive experiments on three benchmark datasets, FACED, DEAP, and ISRUC, demonstrate the superior performance of State Mamba in cross-subject emotion recognition and sleep-stage recognition tasks, validating its robust generalization capability.
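The multi-channel state-space idea summarized above can be illustrated with a per-channel linear recurrence, h_t = A h_{t-1} + B x_t, y_t = C h_t, followed by a mixing step that couples the EEG channels spatially. The sketch below is a minimal NumPy illustration under simplifying assumptions, not the paper's implementation: the names `ssm_scan`, `multichannel_ssm`, and `W_spatial` are hypothetical, and the A, B, C parameters here are fixed rather than input-dependent (selective) as in a full Mamba block.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space scan over one EEG channel (illustrative only).

    x: (T,) input sequence; A: (N, N) state transition;
    B: (N,) input projection; C: (N,) output projection.
    Implements h_t = A @ h_{t-1} + B * x_t, y_t = C @ h_t.
    """
    h = np.zeros(A.shape[0])
    y = np.empty_like(x, dtype=float)
    for t, xt in enumerate(x):
        h = A @ h + B * xt      # temporal state transition
        y[t] = C @ h            # read out the hidden state
    return y

def multichannel_ssm(X, A, B, C, W_spatial):
    """Independent temporal scan per channel, then spatial coupling.

    X: (channels, T) EEG segment; W_spatial: (channels, channels)
    mixing matrix (a stand-in for the model's spatial state coupling).
    """
    Y = np.stack([ssm_scan(X[c], A, B, C) for c in range(X.shape[0])])
    return W_spatial @ Y
```

In practice the temporal scan is run in parallel across channels and the spatial coupling is learned; this loop form only makes the state-transition structure explicit.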

Published

2026-03-14

How to Cite

Weng, W., Gu, Y., Ma, Y., Liu, Y., Zhang, Y., & Chen, Y. (2026). State Mamba: Spatiotemporal EEG State-Space Model with Dynamic Brain Alignment for Cross-Subject Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(21), 17850-17858. https://doi.org/10.1609/aaai.v40i21.38843

Section

AAAI Technical Track on Humans and AI