Multi-Attribute Transfer via Disentangled Representation

Authors

  • Jianfu Zhang, Shanghai Jiao Tong University
  • Yuanyuan Huang, Shanghai Jiao Tong University
  • Yaoyi Li, Shanghai Jiao Tong University
  • Weijie Zhao, Versa
  • Liqing Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019195

Abstract

Recent studies have shown significant progress on the image-to-image translation task, especially facilitated by Generative Adversarial Networks. These methods can synthesize highly realistic images and alter the attribute labels of the images. However, they employ attribute vectors to specify the target domain, which diminishes image-level attribute diversity. In this paper, we propose a novel model that forms disentangled representations by projecting images onto latent units, i.e., grouped feature channels of a Convolutional Neural Network, to disentangle the information belonging to different attributes. Thanks to the disentangled representation, we can transfer attributes according to the attribute labels while retaining the diversity beyond the labels, namely the styles within each image. This is achieved by specifying some attributes and swapping the corresponding latent units to "swap" the appearance of those attributes, or by applying channel-wise interpolation to blend different attributes. To verify the motivation of our proposed model, we train and evaluate it on the face dataset CelebA. Furthermore, evaluation on the facial expression dataset RaFD demonstrates the generalizability of our proposed model.
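The abstract describes attribute transfer as swapping or interpolating grouped channels ("latent units") of an encoded image. The sketch below is a minimal illustration of that idea and is not the authors' implementation; the names (encode/decode are assumed to exist elsewhere), the number of attributes, and the channels per unit are hypothetical assumptions chosen only for the example.

    # Minimal sketch of latent-unit attribute swapping and channel-wise blending,
    # assuming an encoder maps an image to a latent tensor whose channels are
    # grouped into one unit per attribute (assumption, not from the paper).
    import torch

    NUM_ATTRS = 5        # assumed number of attributes
    UNIT_CHANNELS = 32   # assumed channels per latent unit

    def swap_attribute(z_a: torch.Tensor, z_b: torch.Tensor, attr: int) -> torch.Tensor:
        """Copy the latent unit (channel group) of `attr` from z_b into z_a."""
        start, end = attr * UNIT_CHANNELS, (attr + 1) * UNIT_CHANNELS
        z_out = z_a.clone()
        z_out[:, start:end] = z_b[:, start:end]
        return z_out

    def blend_attribute(z_a: torch.Tensor, z_b: torch.Tensor, attr: int, alpha: float = 0.5) -> torch.Tensor:
        """Channel-wise interpolation of one attribute's latent unit."""
        start, end = attr * UNIT_CHANNELS, (attr + 1) * UNIT_CHANNELS
        z_out = z_a.clone()
        z_out[:, start:end] = (1 - alpha) * z_a[:, start:end] + alpha * z_b[:, start:end]
        return z_out

    # Usage sketch: in practice z_a and z_b would come from an encoder over two
    # face images, and a decoder/generator would map the edited latent back to
    # an image; random tensors stand in for encoded features here.
    z_a = torch.randn(1, NUM_ATTRS * UNIT_CHANNELS, 16, 16)
    z_b = torch.randn(1, NUM_ATTRS * UNIT_CHANNELS, 16, 16)
    z_swapped = swap_attribute(z_a, z_b, attr=2)               # transfer attribute 2 from b to a
    z_blended = blend_attribute(z_a, z_b, attr=2, alpha=0.3)   # partial blend of attribute 2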

Published

2019-07-17

How to Cite

Zhang, J., Huang, Y., Li, Y., Zhao, W., & Zhang, L. (2019). Multi-Attribute Transfer via Disentangled Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9195-9202. https://doi.org/10.1609/aaai.v33i01.33019195

Section

AAAI Technical Track: Vision