Multi-Attribute Transfer via Disentangled Representation


  • Jianfu Zhang Shanghai Jiao Tong University
  • Yuanyuan Huang Shanghai Jiao Tong University
  • Yaoyi Li Shanghai Jiao Tong University
  • Weijie Zhao Versa
  • Liqing Zhang Shanghai Jiao Tong University



Recent studies show significant progress on the image-to-image translation task, especially driven by Generative Adversarial Networks, which can synthesize highly realistic images and alter the attribute labels of images. However, these works specify the target domain with attribute vectors, which diminishes image-level attribute diversity. In this paper, we propose a novel model that forms disentangled representations by projecting images onto latent units, i.e., grouped feature channels of a Convolutional Neural Network, so that the information of different attributes is disassembled. Thanks to the disentangled representation, we can transfer attributes according to attribute labels and, moreover, retain the diversity beyond the labels, namely the styles within each image. This is achieved by selecting some attributes and swapping the corresponding latent units to “swap” the attributes’ appearance, or by applying channel-wise interpolation to blend different attributes. To verify the motivation of our proposed model, we train and evaluate it on the face dataset CelebA. Furthermore, evaluation on another facial expression dataset, RaFD, demonstrates the generalizability of our proposed model.
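The unit-swap and channel-wise interpolation operations described in the abstract can be sketched as array operations on a latent code whose channels are partitioned into per-attribute groups. The group boundaries, attribute names, and shapes below are illustrative assumptions for the sketch, not the paper's actual implementation:

```python
import numpy as np

# Assumed setup: the encoder maps an image to a latent code of shape
# (channels, H, W), and channels are partitioned into "latent units",
# one contiguous group per attribute. These groups are hypothetical.
NUM_CHANNELS = 8
UNITS = {"smile": slice(0, 4), "glasses": slice(4, 8)}

def swap_unit(z_a, z_b, attr):
    """Swap the channel group for `attr` between two latent codes."""
    za, zb = z_a.copy(), z_b.copy()
    sl = UNITS[attr]
    za[sl], zb[sl] = z_b[sl].copy(), z_a[sl].copy()
    return za, zb

def blend_unit(z_a, z_b, attr, alpha):
    """Channel-wise interpolation of one attribute's latent unit,
    leaving all other units of z_a untouched."""
    z = z_a.copy()
    sl = UNITS[attr]
    z[sl] = (1 - alpha) * z_a[sl] + alpha * z_b[sl]
    return z

# Toy latent codes standing in for two encoded images.
z1 = np.zeros((NUM_CHANNELS, 2, 2))
z2 = np.ones((NUM_CHANNELS, 2, 2))

s1, s2 = swap_unit(z1, z2, "smile")       # s1 now carries z2's smile unit
mid = blend_unit(z1, z2, "glasses", 0.5)  # half-blended glasses unit
```

Decoding such a modified code would then yield an image with the swapped or blended attribute; the surrounding channels, which carry the remaining attributes and per-image style, are left intact.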




How to Cite

Zhang, J., Huang, Y., Li, Y., Zhao, W., & Zhang, L. (2019). Multi-Attribute Transfer via Disentangled Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9195-9202.



AAAI Technical Track: Vision