Improved Consistency Regularization for GANs

Authors

  • Zhengli Zhao (UC Irvine, Google)
  • Sameer Singh (UC Irvine)
  • Honglak Lee (Google)
  • Zizhao Zhang (Google)
  • Augustus Odena (Google)
  • Han Zhang (Google)

Keywords

Neural Generative Models & Autoencoders, Unsupervised & Self-Supervised Learning, Representation Learning

Abstract

Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator. We improve on this technique in several ways. We first show that consistency regularization can introduce artifacts into the GAN samples and explain how to fix this issue. We then propose several modifications to the consistency regularization procedure designed to improve its performance. We carry out extensive experiments quantifying the benefit of our improvements. For unconditional image synthesis on CIFAR-10 and CelebA, our modifications yield the best known FID scores on various GAN architectures. For conditional image synthesis on CIFAR-10, we improve the state-of-the-art FID score from 11.48 to 9.21. Finally, on ImageNet-2012, we apply our technique to the original BigGAN model and improve the FID from 6.66 to 5.38, which is the best score at that model size.
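The abstract itself contains no code, but the core idea it builds on, consistency regularization for the discriminator, can be sketched briefly: the discriminator is penalized whenever its output changes for an augmented copy of an input. The snippet below is a minimal illustrative sketch, not the authors' implementation; the toy linear `discriminator`, the `augment` function, and the weight `lam` are all hypothetical stand-ins for a real GAN discriminator, a real augmentation pipeline, and a tuned hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Toy linear "discriminator" logit; a stand-in for a real network.
    return x @ w

def augment(x, rng):
    # Hypothetical stochastic augmentation: horizontal flip plus small noise.
    return x[:, ::-1] + 0.01 * rng.normal(size=x.shape)

def consistency_loss(x, w, rng, lam=10.0):
    # Penalize the discriminator for changing its output under augmentation;
    # this term is added to the usual adversarial loss during training.
    d_real = discriminator(x, w)
    d_aug = discriminator(augment(x, rng), w)
    return lam * float(np.mean((d_real - d_aug) ** 2))

x = rng.normal(size=(4, 8))   # batch of 4 "images" flattened to 8 dims
w = rng.normal(size=8)
loss = consistency_loss(x, w, rng)
```

In practice the augmentation is applied to real and generated images alike, and the paper's modifications concern where and how this penalty is applied; this sketch only conveys the basic penalty.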

Published

2021-05-18

How to Cite

Zhao, Z., Singh, S., Lee, H., Zhang, Z., Odena, A., & Zhang, H. (2021). Improved Consistency Regularization for GANs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11033-11041. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17317

Section

AAAI Technical Track on Machine Learning V