AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows


  • Aditya Grover, Stanford University
  • Christopher Chute, Stanford University
  • Rui Shu, Stanford University
  • Zhangjie Cao, Stanford University
  • Stefano Ermon, Stanford University



Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative modeling framework that models each domain via a normalizing flow. The use of normalizing flows allows for a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two methods; and b) learning and exact inference of a shared representation in the latent space of the generative model. We derive a uniform set of conditions under which AlignFlow is marginally-consistent for the different learning objectives. Furthermore, we show that AlignFlow guarantees exact cycle consistency in mapping datapoints from a source domain to a target domain and back to the source domain. Empirically, AlignFlow outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation, and can be used to simultaneously interpolate across the various domains using the learned representation.
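The exact cycle-consistency guarantee in the abstract follows from invertibility: if each domain has an invertible flow into a shared latent space, the cross-domain map can be written as a composition of one flow with the inverse of the other, and the round trip is the identity by construction. The following minimal sketch illustrates this with toy affine bijections standing in for real normalizing flows; all function names are hypothetical and not from the paper's code.

```python
# Toy illustration of exact cycle consistency via invertible maps.
# With flows f_A, f_B into a shared latent space Z, the cross-domain
# maps are G_{A->B} = f_B^{-1} o f_A and G_{B->A} = f_A^{-1} o f_B,
# so G_{B->A}(G_{A->B}(x)) = x exactly (up to floating-point error).
# Affine maps below are stand-ins for real normalizing flows.

def make_affine_flow(scale, shift):
    """Return (forward, inverse) for the invertible map x -> scale*x + shift."""
    forward = lambda x: scale * x + shift
    inverse = lambda z: (z - shift) / scale
    return forward, inverse

f_A, f_A_inv = make_affine_flow(2.0, 1.0)   # flow for domain A
f_B, f_B_inv = make_affine_flow(0.5, -3.0)  # flow for domain B

def a_to_b(x):
    """G_{A->B}: map x from domain A to domain B through the latent space."""
    return f_B_inv(f_A(x))

def b_to_a(x):
    """G_{B->A}: map x from domain B back to domain A."""
    return f_A_inv(f_B(x))

x = 4.2
round_trip = b_to_a(a_to_b(x))
print(abs(round_trip - x) < 1e-9)  # → True: exact cycle consistency
```

Note that no cycle-consistency penalty term is needed here, which is the contrast with frameworks such as CycleGAN, where the round-trip constraint is only encouraged by an auxiliary loss rather than guaranteed by the architecture.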




How to Cite

Grover, A., Chute, C., Shu, R., Cao, Z., & Ermon, S. (2020). AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4028-4035.



AAAI Technical Track: Machine Learning