Overcoming Concept Shift in Domain-Aware Settings through Consolidated Internal Distributions

Authors

  • Mohammad Rostami, University of Southern California
  • Aram Galstyan, USC Information Sciences Institute

DOI:

https://doi.org/10.1609/aaai.v37i8.26151

Keywords:

ML: Lifelong and Continual Learning, ML: Representation Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

We develop an algorithm to improve the predictive performance of a pre-trained model under concept shift without retraining the model from scratch, when only unannotated samples of the initial concepts are accessible. We model this problem as a domain adaptation problem in which the source domain data is inaccessible during model adaptation. The core idea is to consolidate the intermediate internal distribution, learned to represent the source domain data, after adapting the model. We provide a theoretical analysis and conduct extensive experiments on five benchmark datasets to demonstrate that the proposed method is effective.
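The sketch below illustrates one way the abstract's core idea could be realized; it is an assumption-laden illustration, not the authors' implementation. It assumes the "internal distribution" is a Gaussian mixture model (GMM) fitted to source-domain embeddings before the source data is discarded, and that adaptation aligns target embeddings to samples drawn from that stored mixture via a sliced Wasserstein loss. The function names (sliced_wasserstein, adapt) and all hyperparameters are hypothetical.

```python
# Hypothetical sketch: source-free adaptation against a stored internal GMM.
# Assumptions beyond the abstract: GMM internal distribution, sliced
# Wasserstein alignment, and a PyTorch encoder mapping inputs to embeddings.
import torch


def sliced_wasserstein(x, y, n_projections=64):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance between
    two equally sized point clouds x, y of shape (n, d)."""
    d = x.size(1)
    theta = torch.randn(n_projections, d, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)  # random unit directions
    x_proj, y_proj = x @ theta.T, y @ theta.T        # (n, n_projections)
    # In 1-D, the Wasserstein distance reduces to comparing sorted samples.
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return ((x_sorted - y_sorted) ** 2).mean()


def adapt(encoder, means, covs, weights, target_loader, steps=1000, lr=1e-4):
    """Update the encoder so unlabeled target embeddings match the stored
    (consolidated) internal distribution; no source data is touched."""
    internal = torch.distributions.MixtureSameFamily(
        torch.distributions.Categorical(weights),             # (K,)
        torch.distributions.MultivariateNormal(means, covs),  # (K, d), (K, d, d)
    )
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    batches = iter(target_loader)
    for _ in range(steps):
        try:
            (x_t,) = next(batches)
        except StopIteration:
            batches = iter(target_loader)
            (x_t,) = next(batches)
        z_target = encoder(x_t)                             # target embeddings
        z_internal = internal.sample((z_target.size(0),))   # draws from the GMM
        loss = sliced_wasserstein(z_target, z_internal)     # alignment loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder
```

Under these assumptions, aligning to the stored mixture rather than to raw source samples is what makes the scheme source-free: only the mixture parameters (means, covariances, weights) need to be retained after pre-training.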

Published

2023-06-26

How to Cite

Rostami, M., & Galstyan, A. (2023). Overcoming Concept Shift in Domain-Aware Settings through Consolidated Internal Distributions. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9623-9631. https://doi.org/10.1609/aaai.v37i8.26151

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III