Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse

Authors

  • Yuxiang Liu, Nanjing University
  • Jidong Ge, Nanjing University
  • Chuanyi Li, Nanjing University
  • Jie Gui, Southeast University

DOI:

https://doi.org/10.1609/aaai.v35i3.16320

Keywords:

Learning & Optimization for CV, Object Detection & Categorization, Other Foundations of Computer Vision

Abstract

Normalization operations are essential for state-of-the-art neural networks and enable us to train a network from scratch with a large learning rate (LR). We attempt to explain the real effect of Batch Normalization (BN) from the perspective of variance transmission by investigating the relationship between BN and Weights Normalization (WN). In this work, we demonstrate that the shift of the average gradient amplifies the variance of every convolutional (conv) layer. To address this shift, we propose Parametric Weights Standardization (PWS), a fast module for conv filters that is robust to mini-batch size. PWS provides a speed-up comparable to BN while requiring less computation, and it does not alter the output of a conv layer. PWS enables the network to converge fast without normalizing the outputs. This result strengthens the case for the shift of the average gradient as the underlying cause and explains why BN works from the perspective of variance transmission. The code and appendix will be made available at https://github.com/lyxzzz/PWSConv.
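
Illustrative sketch: the abstract describes PWS as a standardization applied to conv filters rather than to layer outputs. The PyTorch-style snippet below is a minimal sketch of what such a weight-standardized conv layer could look like; the class name PWSConv2d, the learnable per-filter scale gamma, the epsilon, and the choice of standardizing over each filter's fan-in are assumptions made for illustration, not the authors' implementation (see the repository linked above for the actual code).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PWSConv2d(nn.Conv2d):
    # Hypothetical weight-standardized conv layer: the filters are
    # standardized before the convolution, so the layer's outputs are
    # not normalized directly.
    def __init__(self, in_channels, out_channels, kernel_size, eps=1e-5, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        self.eps = eps
        # Assumed learnable per-filter scale, motivated by "Parametric".
        self.gamma = nn.Parameter(torch.ones(out_channels, 1, 1, 1))

    def forward(self, x):
        w = self.weight
        # Standardize each filter to zero mean and unit variance over its fan-in.
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w = self.gamma * (w - mean) / torch.sqrt(var + self.eps)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

Used as a drop-in replacement for nn.Conv2d (e.g. PWSConv2d(64, 128, 3, padding=1)), such a module adds only a per-filter mean/variance computation per forward pass and no normalization of the activations.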

Published

2021-05-18

How to Cite

Liu, Y., Ge, J., Li, C., & Gui, J. (2021). Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2216-2224. https://doi.org/10.1609/aaai.v35i3.16320

Section

AAAI Technical Track on Computer Vision II