Dynamically Grown Generative Adversarial Networks

Authors

  • Lanlan Liu University of Michigan, Ann Arbor
  • Yuting Zhang Amazon Web Services
  • Jia Deng Princeton University
  • Stefano Soatto Amazon Web Services

Keywords

Neural Generative Models & Autoencoders

Abstract

Recent work introduced progressive network growing as a promising way to ease the training of large GANs, but the model design and architecture-growing strategy remain under-explored and require manual design for different image data. In this paper, we propose a method to dynamically grow a GAN during training, automatically optimizing the network architecture and its parameters together. The method embeds architecture search techniques as an interleaving step with gradient-based training to periodically seek the optimal architecture-growing strategy for the generator and discriminator. It enjoys the benefits of both eased training, due to progressive growing, and improved performance, due to a broader architecture design space. Experimental results demonstrate new state-of-the-art image generation performance. Observations from the search procedure also provide constructive insights into GAN model design, such as generator-discriminator balance and convolutional layer choices.
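The interleaving of growth decisions with training described in the abstract can be sketched schematically. The following toy Python sketch is an illustration of the general idea only, not the authors' implementation: the architectures are plain lists of layer names, `train_steps` is a placeholder for gradient-based GAN training, and `score` is a hypothetical proxy for a validation metric (e.g., FID); the candidate actions and the balance heuristic are assumptions made for illustration.

```python
def train_steps(arch_g, arch_d, steps):
    """Placeholder for gradient-based GAN training of the current
    generator/discriminator architectures for a fixed number of steps."""
    pass


def score(arch_g, arch_d):
    """Hypothetical proxy score for a candidate architecture pair.
    Toy heuristic only: prefer a balanced generator/discriminator depth,
    echoing the generator-discriminator balance observation in the paper."""
    return -abs(len(arch_g) - len(arch_d))


def grow_gan(rounds=4, steps_per_round=100):
    """Toy sketch: alternate gradient training with a periodic search step
    that picks a growing action for the generator and/or discriminator."""
    arch_g, arch_d = ["conv3x3"], ["conv3x3"]
    for _ in range(rounds):
        # 1) ordinary gradient-based training of the current architectures
        train_steps(arch_g, arch_d, steps_per_round)
        # 2) candidate growing actions (illustrative choices only):
        #    grow the generator, grow the discriminator, or grow both
        candidates = [
            (arch_g + ["conv3x3"], arch_d),
            (arch_g, arch_d + ["conv3x3"]),
            (arch_g + ["conv5x5"], arch_d + ["conv3x3"]),
        ]
        # 3) keep the best-scoring candidate and continue training it
        arch_g, arch_d = max(candidates, key=lambda c: score(*c))
    return arch_g, arch_d
```

With the balance-based toy score, the search grows both networks in lockstep each round, so after four rounds both architectures have five layers; a real search would instead be driven by a generation-quality metric.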

Published

2021-05-18

How to Cite

Liu, L., Zhang, Y., Deng, J., & Soatto, S. (2021). Dynamically Grown Generative Adversarial Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8680-8687. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17052

Section

AAAI Technical Track on Machine Learning III