Deep Domain-Adversarial Image Generation for Domain Generalisation

Authors

  • Kaiyang Zhou University of Surrey
  • Yongxin Yang University of Surrey
  • Timothy Hospedales University of Edinburgh
  • Tao Xiang University of Surrey

DOI:

https://doi.org/10.1609/aaai.v34i07.7003

Abstract

Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset with a different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal of DoTNet is to map the source training data to unseen domains. This is achieved by a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen-domain data, we make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
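The abstract describes an adversarial objective for DoTNet: perturbed source data should keep its class label (low label-classifier loss) while fooling the domain classifier (high domain-classifier loss). The following is a minimal, hedged sketch of that objective only. All networks are stand-ins (linear maps rather than deep nets), and names such as `W_dotnet` and the perturbation strength `lam` are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

D = 16           # flattened "image" dimension (assumption)
num_classes = 3  # label classes (assumption)
num_domains = 2  # number of source domains (assumption)

# Stand-in parameters for the three components named in the abstract.
W_label = rng.normal(size=(D, num_classes))   # label classifier
W_domain = rng.normal(size=(D, num_domains))  # domain classifier
W_dotnet = rng.normal(size=(D, D)) * 0.01     # DoTNet (perturbation generator)

x = rng.normal(size=(8, D))                   # mini-batch of source data
y = rng.integers(0, num_classes, size=8)      # class labels
d = rng.integers(0, num_domains, size=8)      # domain labels
lam = 0.5                                     # perturbation strength (assumption)

# DoTNet maps source data toward an "unseen" domain via a learned perturbation.
x_new = x + lam * (x @ W_dotnet)

# Adversarial objective: keep the label loss low while making the domain
# loss high (minimised w.r.t. DoTNet's parameters).
label_loss = cross_entropy(softmax(x_new @ W_label), y)
domain_loss = cross_entropy(softmax(x_new @ W_domain), d)
dotnet_loss = label_loss - domain_loss

# The label classifier would then be trained on both x and x_new,
# i.e. the generated data augments the source training set.
print(float(dotnet_loss))
```

In a full implementation, gradients of `dotnet_loss` would update only DoTNet, while the label classifier trains on the augmented batch; the sketch above just evaluates the objective once on random data.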

Published

2020-04-03

How to Cite

Zhou, K., Yang, Y., Hospedales, T., & Xiang, T. (2020). Deep Domain-Adversarial Image Generation for Domain Generalisation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13025-13032. https://doi.org/10.1609/aaai.v34i07.7003

Section

AAAI Technical Track: Vision