Deep Frequency Principle Towards Understanding Why Deeper Learning Is Faster

Authors

  • Zhiqin John Xu, Shanghai Jiao Tong University
  • Hanxu Zhou, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v35i12.17261

Keywords:

Optimization, Learning Theory

Abstract

Understanding the effect of depth in deep learning is a critical problem. In this work, we use Fourier analysis to empirically provide a promising mechanism for understanding why deeper feedforward learning is faster. To this end, during analysis we separate a deep neural network, trained by standard stochastic gradient descent, into two parts: a pre-condition component and a learning component, in which the output of the pre-condition component is the input of the learning component. We use a filtering method to characterize the frequency distribution of a high-dimensional function. Based on experiments with deep networks and real datasets, we propose a deep frequency principle: the effective target function for a deeper hidden layer biases towards lower frequencies during training. Therefore, the learning component effectively learns a lower-frequency function when the pre-condition component has more layers. Combined with the well-studied frequency principle, i.e., that deep neural networks learn lower-frequency functions faster, the deep frequency principle provides a reasonable explanation for why deeper learning is faster. We believe these empirical studies will be valuable for future theoretical studies of the effect of depth in deep learning.
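The abstract does not spell out the filtering method. As one illustration only (not the authors' exact procedure), a function sampled on high-dimensional data can be split into low- and high-frequency parts by smoothing with a Gaussian kernel in input space, which acts as a low-pass filter in the frequency domain; the residual is the high-frequency part. The sketch below assumes this Gaussian-smoothing construction, with the width `sigma` and the helper name `frequency_split` chosen here for illustration.

```python
# Minimal sketch (an assumption, not the paper's exact method): split a function
# sampled at data points into low- and high-frequency components via Gaussian
# kernel smoothing. Smoothing with width `sigma` is a low-pass filter; the
# residual y - y_low carries the high-frequency content.
import numpy as np

def frequency_split(X, y, sigma=1.0):
    """X: (n, d) input samples; y: (n,) sampled outputs of the function."""
    # Pairwise squared distances between sample points.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian weights, row-normalized so each smoothed value is a
    # weighted average of the sampled outputs.
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)
    y_low = W @ y          # low-frequency component
    y_high = y - y_low     # high-frequency residual
    return y_low, y_high

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))
    # Target with an explicit low-frequency and high-frequency part.
    y = np.sin(2 * X[:, 0]) + 0.1 * np.sin(40 * X[:, 1])
    y_low, y_high = frequency_split(X, y, sigma=0.3)
    # Fraction of the function's power captured below the cutoff set by sigma.
    print(np.sum(y_low ** 2) / np.sum(y ** 2))
```

In this spirit, one could track the low-frequency power ratio of the effective target function seen by each hidden layer over training; the deep frequency principle predicts that this ratio is larger for deeper hidden layers.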

Published

2021-05-18

How to Cite

Xu, Z. J., & Zhou, H. (2021). Deep Frequency Principle Towards Understanding Why Deeper Learning Is Faster. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10541-10550. https://doi.org/10.1609/aaai.v35i12.17261

Section

AAAI Technical Track on Machine Learning V