Sparsity Aware Normalization for GANs
Keywords: (Deep) Neural Network Algorithms
Abstract
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training. In this paper, we analyze the popular spectral normalization scheme, find a significant drawback, and introduce sparsity aware normalization (SAN), a new alternative approach for stabilizing GAN training. As opposed to other normalization methods, our approach explicitly accounts for the sparse nature of the feature maps in convolutional networks with ReLU activations. We illustrate the effectiveness of our method through extensive experiments with a variety of network architectures. As we show, sparsity is particularly dominant in critics used for image-to-image translation settings. In these cases, our approach improves upon existing methods in fewer training epochs and with smaller-capacity networks, while requiring practically no computational overhead.
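For context, the spectral normalization scheme analyzed in the paper constrains each critic layer's weight matrix to unit spectral norm, typically estimated with a few steps of power iteration per training step (Miyato et al., 2018). Below is a minimal NumPy sketch of that baseline scheme; it is an illustration for the reader, not the paper's code, and it does not implement the proposed SAN method.

```python
import numpy as np

def spectral_normalize(W, u=None, n_iters=1, eps=1e-12):
    """Return W divided by an estimate of its largest singular value,
    so the result has (approximately) unit spectral norm.

    A minimal sketch of standard spectral normalization via power
    iteration; the vector u can be cached and reused across training
    steps, which is why it is returned alongside the normalized weight.
    """
    out_dim, in_dim = W.shape
    if u is None:
        u = np.random.randn(out_dim)
    for _ in range(n_iters):
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)   # right singular vector estimate
        u = W @ v
        u /= (np.linalg.norm(u) + eps)   # left singular vector estimate
    sigma = u @ W @ v                    # approximate largest singular value
    return W / sigma, u

# Usage: the normalized matrix has spectral norm close to 1.
W = np.random.randn(64, 128)
W_sn, u = spectral_normalize(W, n_iters=5)
print(np.linalg.norm(W_sn, ord=2))       # ~1.0
```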
How to Cite
Kligvasser, I., & Michaeli, T. (2021). Sparsity Aware Normalization for GANs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8181-8190. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16996
AAAI Technical Track on Machine Learning II