DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?

Authors

  • Victor Quétu LTCI, Télécom Paris, Institut Polytechnique de Paris, France
  • Enzo Tartaglione LTCI, Télécom Paris, Institut Polytechnique de Paris, France

DOI:

https://doi.org/10.1609/aaai.v38i13.29393

Keywords:

ML: Deep Learning Algorithms, ML: Learning on the Edge & Model Compression

Abstract

Recent works have shown that modern deep learning models can exhibit a sparse double descent phenomenon. Indeed, as the sparsity of the model increases, the test performance first worsens, since the model overfits the training data; the overfitting then reduces, leading to an improvement in performance; and finally, the model begins to forget critical information, resulting in underfitting. Such behavior prevents the use of traditional early stopping criteria. In this work, we make three key contributions. First, we propose a learning framework that avoids this phenomenon and improves generalization. Second, we introduce an entropy measure that provides more insight into the emergence of the phenomenon and enables the use of traditional stopping criteria. Third, we provide a comprehensive quantitative analysis of contingent factors such as re-initialization methods, model width and depth, and dataset noise. Our contributions are supported by empirical evidence in typical setups. Our code is available at https://github.com/VGCQ/DSD2.
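For illustration only, the sketch below (not the authors' released code; see the repository above for that) traces the kind of curve described in the abstract: it runs an iterative global magnitude-pruning sweep with PyTorch's torch.nn.utils.prune and records test accuracy at each sparsity level. The toy model, synthetic data, 20% pruning ratio, and fine-tuning schedule are all placeholder assumptions; on random data the full double-descent shape will generally not appear.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    torch.manual_seed(0)

    # Toy stand-ins for a real benchmark (e.g., an image-classification set with label noise).
    X_train, y_train = torch.randn(2000, 64), torch.randint(0, 10, (2000,))
    X_test, y_test = torch.randn(500, 64), torch.randint(0, 10, (500,))

    model = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 10))
    loss_fn = nn.CrossEntropyLoss()

    def train(epochs):
        opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(X_train), y_train).backward()
            opt.step()

    @torch.no_grad()
    def test_accuracy():
        return (model(X_test).argmax(dim=1) == y_test).float().mean().item()

    train(epochs=50)  # dense baseline
    layers = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
    for _ in range(15):
        # Globally remove 20% of the still-unpruned weights, then fine-tune the survivors.
        prune.global_unstructured(layers, pruning_method=prune.L1Unstructured, amount=0.2)
        train(epochs=10)
        kept = sum(int((m.weight != 0).sum()) for m, _ in layers)
        total = sum(m.weight.numel() for m, _ in layers)
        print(f"sparsity {1 - kept / total:6.2%}   test accuracy {test_accuracy():.3f}")

In practice the paper's experiments use standard benchmarks (with and without label noise) and study how factors such as re-initialization, width, and depth shape this accuracy-versus-sparsity curve; the sweep above only shows how such a curve would be traced.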

Published

2024-03-24

How to Cite

Quétu, V., & Tartaglione, E. (2024). DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14749-14757. https://doi.org/10.1609/aaai.v38i13.29393

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV