High Fidelity GAN Inversion via Prior Multi-Subspace Feature Composition

Authors

  • Guanyue Li, South China University of Technology
  • Qianfen Jiao, City University of Hong Kong
  • Sheng Qian, Huawei Device Company Limited
  • Si Wu, South China University of Technology; City University of Hong Kong
  • Hau-San Wong, City University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v35i9.17017

Keywords:

Unsupervised & Self-Supervised Learning, Applications, Adversarial Learning & Robustness

Abstract

Generative Adversarial Networks (GANs) have shown impressive gains in image synthesis. GAN inversion has recently been studied to understand and utilize the knowledge a GAN learns: a real image is inverted back to a latent code and can thus be reconstructed by the generator. Although increasing the number of latent codes can improve inversion quality to a certain extent, we find that important details may still be neglected when feature composition is performed over all the intermediate feature channels. To address this issue, we propose a Prior multi-Subspace Feature Composition (PmSFC) approach for high-fidelity inversion. Since the intermediate features are highly correlated with each other, we incorporate a self-expressive layer in the generator to discover meaningful subspaces. In this case, the features at a channel can be expressed as a linear combination of those at other channels in the same subspace. We perform feature composition separately in each subspace. The semantic differences between the subspaces benefit inversion quality, since the inversion process is regularized with respect to different aspects of semantics. In the experiments, the superior performance of PmSFC demonstrates the effectiveness of prior subspaces in facilitating GAN inversion, together with extended applications in visual manipulation.
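The self-expressive idea described in the abstract, where each channel's features are reconstructed as a linear combination of the other channels in the same subspace, can be sketched as follows. This is a minimal illustration of self-expressive subspace modeling in general, not the authors' implementation: the function name, the ridge penalty `lam`, and the per-row least-squares solver are all assumptions for the sake of a runnable example.

```python
import numpy as np

def self_expressive_coefficients(F, lam=1e-6):
    """Illustrative sketch: solve min_C ||F - C F||^2 + lam ||C||^2
    subject to diag(C) = 0, so each channel is expressed as a linear
    combination of the other channels.

    F: (num_channels, dim) matrix with one flattened feature map per row.
    Returns C: (num_channels, num_channels) coefficient matrix.
    """
    n = F.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]  # exclude channel i itself
        A = F[others]                             # (n-1, dim)
        # ridge regression for row i: (A A^T + lam I) c = A F[i]
        c = np.linalg.solve(A @ A.T + lam * np.eye(n - 1), A @ F[i])
        C[i, others] = c
    return C

# Toy check: channels 0-2 lie in one 1-D subspace, channels 3-5 in an
# orthogonal one. Cross-subspace coefficients should be (near) zero, so a
# symmetrized affinity |C| + |C|^T would recover the two channel groups.
v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0])
F = np.stack([1 * v1, 2 * v1, 3 * v1, 1 * v2, 2 * v2, 3 * v2])
C = self_expressive_coefficients(F)
```

In a subspace-clustering pipeline, the affinity `|C| + |C|^T` would then be fed to spectral clustering to group the channels into subspaces, within which feature composition is performed separately.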

Published

2021-05-18

How to Cite

Li, G., Jiao, Q., Qian, S., Wu, S., & Wong, H.-S. (2021). High Fidelity GAN Inversion via Prior Multi-Subspace Feature Composition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8366-8374. https://doi.org/10.1609/aaai.v35i9.17017

Section

AAAI Technical Track on Machine Learning II