COGS: A Causal Representation Learning Framework for Out-of-Distribution Generalization in Time Series
DOI:
https://doi.org/10.1609/aaai.v40i30.39753

Abstract
Time series analysis is crucial in fields such as healthcare and finance. However, environmental variations and the inherent non-stationarity of time series data often lead to out-of-distribution (OOD) scenarios, which degrade model performance. Most existing OOD generalization methods focus on images or text, leaving time series analysis relatively underexplored. In this paper, we propose COGS, a novel framework that incorporates causal representation learning into OOD generalization for time series. By imposing structural priors, our method identifies latent variables and learns a causal graph to disentangle causal variables from non-causal ones. The causal variables are then used to learn domain-invariant representations for stable prediction. Moreover, to address the absence of domain labels, we introduce a prototype-based domain discovery algorithm that infers domain labels in an unsupervised manner. The entire framework is optimized in a two-phase iterative manner, yielding robust OOD performance. Extensive experiments on multiple real-world time series datasets demonstrate that our method achieves competitive performance compared to baseline methods.
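The abstract does not detail how the prototype-based domain discovery works. A minimal sketch, assuming (hypothetically) that it amounts to clustering learned representations around a fixed number of prototypes and using cluster assignments as pseudo-domain labels; the function name, interface, and k-means-style updates below are illustrative, not the paper's actual algorithm:

```python
import numpy as np

def discover_domains(reps, k, n_iters=20, seed=0):
    """Assign pseudo-domain labels by clustering sample representations
    around k prototypes (a simple k-means-style sketch).

    reps: (n_samples, dim) array of learned representations.
    Returns (labels, prototypes).
    """
    rng = np.random.default_rng(seed)
    # Initialize prototypes with k randomly chosen representations.
    protos = reps[rng.choice(len(reps), size=k, replace=False)].copy()
    labels = np.zeros(len(reps), dtype=int)
    for _ in range(n_iters):
        # Assign each sample to its nearest prototype (its pseudo-domain).
        dists = np.linalg.norm(reps[:, None, :] - protos[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                protos[j] = reps[labels == j].mean(axis=0)
    return labels, protos
```

In the full framework these pseudo-labels would feed the domain-invariance objective in place of the missing ground-truth domain labels, and the two phases (representation learning and domain discovery) would alternate.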
Published
2026-03-14
How to Cite
Song, X., Cheng, Y., Xiao, T., & Suo, J. (2026). COGS: A Causal Representation Learning Framework for Out-of-Distribution Generalization in Time Series. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25572–25580. https://doi.org/10.1609/aaai.v40i30.39753
Section
AAAI Technical Track on Machine Learning VII