Latent Domains Modeling for Visual Domain Adaptation

Authors

  • Caiming Xiong, State University of New York at Buffalo
  • Scott McCloskey, Honeywell ACS
  • Shao-Hang Hsieh, State University of New York at Buffalo
  • Jason Corso, State University of New York at Buffalo

DOI:

https://doi.org/10.1609/aaai.v28i1.9136

Keywords:

Latent Domain Discovery, Clustering, Domain Adaptation

Abstract

To improve robustness to significant mismatches between source and target domains, arising from changes such as illumination, pose, and image quality, domain adaptation is increasingly popular in computer vision. But most methods assume that the source data come from a single domain, or that multi-domain datasets provide a domain label for each training instance. In practice, most datasets are mixtures of multiple latent domains, and it is difficult to manually provide the domain label of each data point. In this paper, we propose a model that automatically discovers latent domains in visual datasets. We first assume that the visual images are sampled from multiple manifolds, each of which represents a different domain and is represented by a different subspace. Using the neighborhood structure estimated from images belonging to the same category, we approximate the local linear invariant subspace for each image based on its local structure, eliminating the category-specific elements of the feature. Building on this representation, we then propose a squared-loss mutual information based clustering model with a category-distribution prior in each domain to infer the domain assignment for images. In experiments on two common image datasets, our method outperforms existing state-of-the-art methods and demonstrates the benefit of discovering multiple latent domains.
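The abstract compresses two steps: building a category-invariant representation from each image's same-category neighborhood, and clustering those representations into latent domains with a squared-loss mutual information (SMI) objective. The following minimal Python sketch (numpy/scikit-learn) illustrates the general shape of such a pipeline under simplifying assumptions; the function names, the plain local PCA via SVD, the Gaussian kernel, and the eigenvector-argmax assignment are illustrative choices of mine, not the authors' exact model, which additionally imposes the category-distribution prior per domain.

```python
# Illustrative sketch only: plain local-subspace features plus SMIC-style
# spectral assignment, NOT the paper's exact formulation (which adds a
# category-distribution prior to the clustering objective).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_subspace_features(X, y, k=10, d=5):
    """For each image, approximate a local linear subspace from its k
    nearest neighbors within the same category; the top-d principal
    directions (flattened) serve as a category-invariant descriptor.
    Assumes each category has at least k+1 samples and k+1 >= d."""
    feats = np.zeros((X.shape[0], X.shape[1] * d))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        nn = NearestNeighbors(n_neighbors=min(k + 1, len(idx))).fit(X[idx])
        _, nbrs = nn.kneighbors(X[idx])
        for row, i in enumerate(idx):
            local = X[idx[nbrs[row]]]           # same-category neighborhood
            local = local - local.mean(axis=0)  # centering drops the category mean
            # top-d right singular vectors span the local subspace
            _, _, Vt = np.linalg.svd(local, full_matrices=False)
            feats[i] = Vt[:d].ravel()
    return feats

def smi_style_clustering(F, n_domains=2, sigma=1.0):
    """SMIC-like domain assignment: for the plain squared-loss mutual
    information objective, the solution is (up to normalization) spanned
    by the leading eigenvectors of a similarity kernel; each image is
    assigned to the domain with the largest eigenvector response."""
    sq = np.sum(F ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * F @ F.T) / (2 * sigma ** 2))
    _, vecs = np.linalg.eigh(K)                 # eigenvalues in ascending order
    top = vecs[:, -n_domains:]                  # leading eigenvectors
    signs = np.sign(top.sum(axis=0))            # fix per-eigenvector sign ambiguity
    signs[signs == 0] = 1.0
    return np.argmax(top * signs, axis=1)       # hard domain labels

# Usage sketch: domains = smi_style_clustering(local_subspace_features(X, y))
```

The eigen-decomposition above reflects only the known analytic solution of the plain SMIC objective; incorporating the paper's category-distribution prior changes the optimization and is not captured by this sketch.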

Published

2014-06-21

How to Cite

Xiong, C., McCloskey, S., Hsieh, S.-H., & Corso, J. (2014). Latent Domains Modeling for Visual Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9136