New Interpretations of Normalization Methods in Deep Learning


  • Jiacheng Sun Huawei Noah’s Ark Lab
  • Xiangyong Cao Xi'an Jiaotong University
  • Hanwen Liang Huawei Noah’s Ark Lab
  • Weiran Huang Huawei Noah’s Ark Lab
  • Zewei Chen Huawei Noah’s Ark Lab
  • Zhenguo Li Huawei Noah’s Ark Lab



In recent years, a variety of normalization methods have been proposed to aid the training of neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), and group normalization (GN). However, the tools needed to analyze all of these normalization methods in a common setting have been lacking. In this paper, we first propose a lemma that supplies these tools. We then use them to conduct a detailed analysis of popular normalization methods and reach the following conclusions: 1) most normalization methods can be interpreted in a unified framework, namely as normalizing pre-activations or weights onto a sphere; 2) since most existing normalization methods are scaling invariant, optimization can be carried out on a sphere with the scaling symmetry removed, which helps stabilize network training; 3) we prove that training with these normalization methods causes the norm of the weights to increase, which can lead to adversarial vulnerability because it amplifies the effect of an attack. Finally, a series of experiments is conducted to verify these claims.
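The scaling invariance referred to above can be illustrated with a minimal NumPy sketch (not from the paper; the `batch_norm` helper below is a simplified, hypothetical stand-in for a real BN layer, omitting the learnable scale and shift): because batch normalization divides by the batch standard deviation, multiplying the preceding weights by any positive constant rescales the mean and standard deviation by the same factor and leaves the normalized output essentially unchanged.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize pre-activations to zero mean and unit variance
    # across the batch dimension (no learnable scale/shift here).
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 16))   # a batch of 128 inputs
w = rng.normal(size=(16, 8))     # a weight matrix

# Scaling the weights by c > 0 scales both the mean and the standard
# deviation of the pre-activations by c, so the BN output is
# (up to the small eps term) unchanged.
out_original = batch_norm(x @ w)
out_scaled = batch_norm(x @ (3.7 * w))
print(np.allclose(out_original, out_scaled))  # True
```

This is the sense in which the loss is flat along the radial direction of the weights: only the direction of `w` matters, which is why optimization can equivalently be viewed as taking place on a sphere.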




How to Cite

Sun, J., Cao, X., Liang, H., Huang, W., Chen, Z., & Li, Z. (2020). New Interpretations of Normalization Methods in Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5875-5882.



AAAI Technical Track: Machine Learning