Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse
Keywords: Learning & Optimization for CV, Object Detection & Categorization, Other Foundations of Computer Vision
Abstract
Normalization operations are essential for state-of-the-art neural networks and enable us to train a network from scratch with a large learning rate (LR). We attempt to explain the real effect of Batch Normalization (BN) from the perspective of variance transmission by investigating the relationship between BN and Weight Normalization (WN). In this work, we demonstrate that the shift of the average gradient will amplify the variance of every convolutional (conv) layer. We propose Parametric Weights Standardization (PWS), a fast module for conv filters that is robust to mini-batch size, to solve the shift of the average gradient. PWS provides the same speed-up as BN while requiring less computation, and it does not change the output of a conv layer. PWS enables the network to converge fast without normalizing the outputs. This result supports our analysis of the shift of the average gradient and explains why BN works from the perspective of variance transmission. The code and appendix are available at https://github.com/lyxzzz/PWSConv.
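The core idea of standardizing conv filters can be sketched as follows. This is a minimal illustration, not the authors' implementation: it subtracts each filter's mean over its fan-in dimensions (removing the average-gradient shift the abstract describes) and divides by the filter's standard deviation, scaled by a parameter `gamma`; the names `pws_conv_filter`, `gamma`, and `eps` are illustrative assumptions, and the exact PWS formulation is given in the paper.

```python
import numpy as np

def pws_conv_filter(w, gamma=1.0, eps=1e-5):
    """Standardize each conv filter over its fan-in dimensions.

    A hedged sketch of parametric weight standardization: zero-center
    each output filter, divide by its standard deviation, and scale by
    a (learnable, here fixed) parameter gamma. `gamma` and `eps` are
    illustrative assumptions, not the paper's exact parameterization.

    w: filter bank of shape (out_channels, in_channels, kH, kW)
    """
    axes = (1, 2, 3)  # fan-in axes of each output filter
    mean = w.mean(axis=axes, keepdims=True)
    std = w.std(axis=axes, keepdims=True)
    return gamma * (w - mean) / (std + eps)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w_hat = pws_conv_filter(w)
# After standardization every filter has (approximately) zero mean
# and unit variance, independent of the mini-batch.
print(w_hat.mean(axis=(1, 2, 3)))
```

Because the statistics are computed from the weights rather than from activations, the operation is independent of batch size, which is consistent with the abstract's claim that PWS is robust to mini-batch size.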
How to Cite
Liu, Y., Ge, J., Li, C., & Gui, J. (2021). Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2216-2224. https://doi.org/10.1609/aaai.v35i3.16320
AAAI Technical Track on Computer Vision II