Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win

Authors

  • Utku Evci, Google AI
  • Yani Ioannou, University of Calgary
  • Cem Keskin, Facebook
  • Yann Dauphin, Google AI

DOI:

https://doi.org/10.1609/aaai.v36i6.20611

Keywords:

Machine Learning (ML)

Abstract

Sparse Neural Networks (NNs) can match the generalization of dense NNs using a fraction of the compute/storage for inference, and have the potential to enable efficient training. However, naively training unstructured sparse NNs from random initialization results in significantly worse generalization, with the notable exceptions of Lottery Tickets (LTs) and Dynamic Sparse Training (DST). In this work, we attempt to answer: (1) why training unstructured sparse networks from random initialization performs poorly; and (2) what makes LTs and DST the exceptions? We show that sparse NNs have poor gradient flow at initialization and propose a modified initialization for unstructured connectivity. Furthermore, we find that DST methods significantly improve gradient flow during training over traditional sparse training methods. Finally, we show that LTs do not improve gradient flow; rather, their success lies in re-learning the pruning solution they are derived from. However, this comes at the cost of learning novel solutions.
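
The "modified initialization for unstructured connectivity" mentioned above amounts to accounting for the connectivity each unit actually retains under the sparsity mask, rather than the dense layer shape, when setting the initial weight scale. Below is a minimal NumPy sketch of that idea for one linear layer; the function name `sparse_init`, the He-style constant, and the per-row scaling are illustrative assumptions, not the paper's exact formulation or code.

```python
import numpy as np

def sparse_init(mask, rng=None):
    """Illustrative sparse-aware initialization for one linear layer.

    mask: binary array of shape (fan_out, fan_in); 1 = connection kept.
    Dense He/Kaiming init uses Var[w] = 2 / fan_in for every unit. With an
    unstructured mask, output unit i only receives mask[i].sum() inputs, so
    here each row's variance is scaled by its actual fan-in instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-unit fan-in under the sparse connectivity (avoid division by zero).
    unit_fan_in = np.maximum(mask.sum(axis=1, keepdims=True), 1)
    std = np.sqrt(2.0 / unit_fan_in)          # He-style scaling per output unit
    weights = rng.normal(0.0, 1.0, size=mask.shape) * std
    return weights * mask                     # zero out pruned connections

# Example: a 90%-sparse 256x512 layer.
rng = np.random.default_rng(0)
mask = (rng.random((256, 512)) < 0.1).astype(np.float32)
w = sparse_init(mask, rng)
```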

Published

2022-06-28

How to Cite

Evci, U., Ioannou, Y., Keskin, C., & Dauphin, Y. (2022). Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6577-6586. https://doi.org/10.1609/aaai.v36i6.20611

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I