Double-Descent Curves in Neural Networks: A New Perspective Using Gaussian Processes

Authors

  • Ouns El Harzli, University of Oxford
  • Bernardo Cuenca Grau, University of Oxford
  • Guillermo Valle-Pérez, University of Oxford
  • Ard A. Louis, University of Oxford

DOI:

https://doi.org/10.1609/aaai.v38i10.29071

Keywords:

ML: Deep Learning Theory, ML: Kernel Methods

Abstract

Double-descent curves in neural networks describe the phenomenon in which the generalisation error first decreases as the number of parameters grows, then rises after an optimal parameter count that is smaller than the number of data points, and finally decreases again in the overparameterised regime. In this paper, we use techniques from random matrix theory to characterise the spectral distribution of the empirical feature covariance matrix as a width-dependent perturbation of the spectrum of the neural network Gaussian process (NNGP) kernel, thus establishing a novel connection between the NNGP literature and the random matrix theory literature in the context of neural networks. Our analytical expressions allow us to explore the generalisation behaviour of the corresponding kernel and GP regression. Furthermore, they offer a new interpretation of double descent in terms of the discrepancy between the width-dependent empirical kernel and the width-independent NNGP kernel.
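The contrast drawn in the abstract, between the width-dependent empirical feature covariance (conjugate) kernel and the width-independent NNGP kernel, can be illustrated numerically. The sketch below is not taken from the paper and is only a minimal assumption-laden example: it compares the eigenvalue spectrum of the conjugate kernel of a single random ReLU layer, at several widths, with that of the corresponding NNGP arc-cosine kernel; the function names, scalings, and parameter values are all illustrative choices.

```python
# Minimal sketch (not the paper's derivation): the conjugate kernel of a
# finite-width ReLU layer is a width-dependent perturbation of the
# width-independent NNGP (arc-cosine) kernel, and the spectral discrepancy
# shrinks as the width grows.
import numpy as np

rng = np.random.default_rng(0)

def nngp_relu_kernel(X):
    # NNGP kernel of one ReLU layer with standard Gaussian weights:
    # K(x, y) = ||x|| ||y|| (sin(theta) + (pi - theta) cos(theta)) / (2 pi).
    norms = np.linalg.norm(X, axis=1)
    cos = np.clip((X @ X.T) / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(norms, norms) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)

def empirical_kernel(X, width, rng):
    # Conjugate kernel of a width-`width` ReLU layer: (1/width) * Phi @ Phi.T,
    # with standard Gaussian weights so it matches the NNGP kernel in expectation.
    W = rng.standard_normal((X.shape[1], width))
    Phi = np.maximum(X @ W, 0.0)
    return (Phi @ Phi.T) / width

n, d = 50, 10
X = rng.standard_normal((n, d))
eigs_nngp = np.sort(np.linalg.eigvalsh(nngp_relu_kernel(X)))[::-1]

for width in (10, 100, 10_000):
    eigs_emp = np.sort(np.linalg.eigvalsh(empirical_kernel(X, width, rng)))[::-1]
    gap = np.linalg.norm(eigs_emp - eigs_nngp) / np.linalg.norm(eigs_nngp)
    print(f"width = {width:>6}: relative spectral discrepancy ~ {gap:.3f}")
```

Running the sketch prints a relative spectral discrepancy that decreases as the width increases, which is one concrete way to visualise the width-dependent perturbation the abstract describes; the paper itself characterises this discrepancy analytically via random matrix theory.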

Published

2024-03-24

How to Cite

El Harzli, O., Cuenca Grau, B., Valle-Pérez, G., & Louis, A. A. (2024). Double-Descent Curves in Neural Networks: A New Perspective Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11856-11864. https://doi.org/10.1609/aaai.v38i10.29071

Section

AAAI Technical Track on Machine Learning I