Multi-Source Domain Adaptation for Visual Sentiment Classification

Authors

  • Chuang Lin, National University of Singapore
  • Sicheng Zhao, University of California, Berkeley
  • Lei Meng, National University of Singapore
  • Tat-Seng Chua, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v34i03.5651

Abstract

Existing domain adaptation methods for visual sentiment classification are typically investigated under the single-source scenario, where knowledge learned from a source domain with sufficient labeled data is transferred to a target domain with loosely labeled or unlabeled data. In practice, however, data from a single source domain usually have limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space in which data from both the source and target domains share a similar distribution; this is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
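The abstract's core idea — encoding data from several source domains and the target domain into one shared latent space, with a domain discriminator trained adversarially so the distributions become indistinguishable — can be illustrated with a toy NumPy sketch. This is not the authors' MSGAN implementation; the linear encoders, dimensions, and variable names below are illustrative assumptions standing in for the CNN components a real model would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two labeled source domains and one unlabeled target domain,
# each a batch of 8 samples with 4-dim features (purely illustrative).
d_in, d_latent = 4, 3
sources = [rng.normal(m, 1.0, size=(8, d_in)) for m in (0.0, 2.0)]
target = rng.normal(1.0, 1.0, size=(8, d_in))

# One linear encoder per domain, all mapping into a shared latent space
# (linear maps stand in for the CNN feature extractors of a real model).
encoders = [rng.normal(0.0, 0.1, size=(d_in, d_latent)) for _ in range(3)]
latents = [x @ w for x, w in zip(sources + [target], encoders)]

# Domain discriminator: logistic score for "this latent came from the target".
w_disc = rng.normal(0.0, 0.1, size=(d_latent,))

def disc(z):
    return 1.0 / (1.0 + np.exp(-(z @ w_disc)))

# Adversarial objective (sketched, not trained here): the discriminator
# minimizes this cross-entropy, while the encoders are updated against it
# so source and target latents become indistinguishable.
domain_labels = [np.zeros(8), np.zeros(8), np.ones(8)]  # 0 = source, 1 = target
eps = 1e-9
bce = -np.mean([
    np.mean(y * np.log(disc(z) + eps) + (1 - y) * np.log(1 - disc(z) + eps))
    for y, z in zip(domain_labels, latents)
])
print(f"domain-confusion loss: {bce:.3f}")
```

In a full system, a sentiment classifier would be trained on the aligned source latents and applied to the target latents; the cycle-consistency terms mentioned in the abstract (not shown here) would additionally constrain the mappings between domains.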

Published

2020-04-03

How to Cite

Lin, C., Zhao, S., Meng, L., & Chua, T.-S. (2020). Multi-Source Domain Adaptation for Visual Sentiment Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2661-2668. https://doi.org/10.1609/aaai.v34i03.5651

Section

AAAI Technical Track: Humans and AI