ContraFeat: Contrasting Deep Features for Semantic Discovery


  • Xinqi Zhu University of Sydney
  • Chang Xu University of Sydney
  • Dacheng Tao University of Sydney



ML: Deep Generative Models & Autoencoders, CV: Representation Learning for Vision, ML: Representation Learning, ML: Unsupervised & Self-Supervised Learning


StyleGAN has shown strong potential for disentangled semantic control, thanks to its special design of multi-layer intermediate latent variables. However, existing semantic discovery methods on StyleGAN rely on manual selection of the latent layers to modify in order to obtain satisfactory manipulation results, which is tedious and demanding. In this paper, we propose a model that automates this process and achieves state-of-the-art semantic discovery performance. The model consists of an attention-equipped navigator module and losses that contrast deep-feature changes. We propose two model variants: one contrasts samples in a binary manner, and the other contrasts samples against learned prototype variation patterns. The proposed losses are computed on pretrained deep features, based on our assumption that these features implicitly possess the desired semantic variation structure, including consistency and orthogonality. Additionally, we design two metrics to quantitatively evaluate the performance of semantic discovery methods on the FFHQ dataset, and we show that disentangled representations can be derived via a simple training process. Experimentally, we show that our models achieve state-of-the-art semantic discovery results without relying on layer-wise manual selection, and that the discovered semantics can be used to manipulate real-world images.
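To make the consistency/orthogonality assumption concrete, the sketch below (not the authors' implementation; all names and the shape convention are hypothetical) scores a set of candidate latent directions by the deep-feature changes they induce: changes caused by the same direction should agree across samples (consistency), while changes caused by different directions should be dissimilar (orthogonality).

```python
import numpy as np

def semantic_contrast_loss(delta_feats):
    """Toy contrastive objective over deep-feature changes.

    delta_feats: array of shape (K, N, D) -- the change in a pretrained
    feature vector caused by each of K candidate directions, measured on
    N samples (hypothetical setup; the paper's actual losses differ).
    Lower is better: rewards within-direction consistency and penalizes
    cross-direction similarity.
    """
    K, N, D = delta_feats.shape
    # Unit-normalize each change vector so dot products are cosine similarities.
    v = delta_feats / (np.linalg.norm(delta_feats, axis=-1, keepdims=True) + 1e-8)
    # Consistency: similarity of each sample's change to the direction's mean change.
    mean_dir = v.mean(axis=1)
    mean_dir /= np.linalg.norm(mean_dir, axis=-1, keepdims=True) + 1e-8
    consistency = np.einsum('knd,kd->kn', v, mean_dir).mean()
    # Orthogonality: penalize similarity between different directions' mean changes.
    cross = mean_dir @ mean_dir.T
    off_diag = np.abs(cross[~np.eye(K, dtype=bool)]).mean()
    return off_diag - consistency
```

Under this toy objective, a set of directions that each produce the same, mutually orthogonal feature change on every sample scores the minimum of -1, while unstructured changes score higher.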




How to Cite

Zhu, X., Xu, C., & Tao, D. (2023). ContraFeat: Contrasting Deep Features for Semantic Discovery. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11470-11478.



AAAI Technical Track on Machine Learning IV