Enhancing Evolving Domain Generalization through Dynamic Latent Representations
DOI:
https://doi.org/10.1609/aaai.v38i14.29536
Keywords:
ML: Transfer, Domain Adaptation, Multi-Task Learning, CV: Representation Learning for Vision, DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data, KRR: Applications, ML: Classification and Regression, ML: Evolutionary Learning, ML: Representation Learning
Abstract
Domain generalization is a critical challenge for machine learning systems. Prior domain generalization methods focus on extracting domain-invariant features across several stationary domains to enable generalization to new domains. However, in non-stationary tasks where new domains evolve along an underlying continuous structure, such as time, merely extracting invariant features is insufficient for generalization to the evolving new domains. Nevertheless, it is non-trivial to learn both evolving and invariant features within a single model due to their conflicts. To bridge this gap, we build causal models to characterize the distribution shifts concerning the two patterns, and propose to learn both dynamic and invariant features via a new framework called Mutual Information-Based Sequential Autoencoders (MISTS). MISTS adopts information-theoretic constraints on sequential autoencoders to disentangle the dynamic and invariant features, and leverages an adaptive classifier to make predictions based on both evolving and invariant information. Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information, and presents promising results on evolving domain generalization tasks.
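The latent split the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch only, not the authors' implementation: the linear encoders, the names `z_inv`/`z_dyn`, the drifting classifier weights, and all dimensions are hypothetical stand-ins for MISTS's sequential autoencoders and mutual-information constraints. It shows only the structural idea: one latent factor shared across evolving domains, one that changes with the domain index, and a classifier that uses both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes hypothetical): T evolving domains (e.g. time steps),
# each with n samples of dimension d.
T, n, d = 5, 8, 16
z_inv_dim, z_dyn_dim = 4, 3
x = rng.normal(size=(T, n, d))  # observations across evolving domains

# Hypothetical encoders: one shared map for the invariant factor, and one
# per-domain map for the dynamic factor (standing in for the sequential
# autoencoder's recurrent encoder).
W_inv = rng.normal(size=(d, z_inv_dim))
W_dyn = rng.normal(size=(T, d, z_dyn_dim))

z_inv = x @ W_inv                            # (T, n, z_inv_dim), same map for every domain
z_dyn = np.einsum("tnd,tdk->tnk", x, W_dyn)  # (T, n, z_dyn_dim), evolves with t

# An "adaptive" classifier sketch: its weights drift linearly with the domain
# index t, so predictions depend on both invariant and evolving factors.
z = np.concatenate([z_inv, z_dyn], axis=-1)  # (T, n, z_inv_dim + z_dyn_dim)
W_cls = rng.normal(size=(z_inv_dim + z_dyn_dim,))
drift = 0.1 * rng.normal(size=(z_inv_dim + z_dyn_dim,))
logits = np.einsum("tnk,tk->tn", z, W_cls[None, :] + np.arange(T)[:, None] * drift)
preds = (logits > 0).astype(int)             # binary prediction per sample per domain
print(z.shape, preds.shape)
```

In the actual framework, the two factors are kept apart not by separate weight matrices alone but by information-theoretic penalties during training; the sketch above only fixes the shapes and data flow.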
Published
2024-03-24
How to Cite
Xie, B., Chen, Y., Wang, J., Zhou, K., Han, B., Meng, W., & Cheng, J. (2024). Enhancing Evolving Domain Generalization through Dynamic Latent Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 16040-16048. https://doi.org/10.1609/aaai.v38i14.29536
Issue
Section
AAAI Technical Track on Machine Learning V