Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks

Authors

  • Kaleab Belay, Addis Ababa Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v36i11.21699

Keywords:

Robotics, AI Architectures, Multi-Robot Systems, Machine Learning

Abstract

Deep Neural Networks have memory and computational demands that often render them difficult to use in low-resource environments. Moreover, highly dense networks are over-parameterized and thus prone to overfitting. To address these problems, we introduce a novel algorithm that prunes (sparsifies) weights from the network by taking into account both their magnitudes and their gradients, computed on a validation dataset. Unlike existing pruning methods, our method does not require the network model to be retrained once initial training is completed. On the CIFAR-10 dataset, our method reduced the number of parameters of MobileNet by a factor of 9X, from 14 million to 1.5 million, with just a 3.8% drop in accuracy.
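The sketch below illustrates the general idea the abstract describes: score each weight using both its magnitude and its gradient on a validation set, then zero out the lowest-scoring fraction with no retraining afterwards. The multiplicative score |w| · |∂L/∂w|, the global threshold, and all function and argument names here are illustrative assumptions, not the paper's actual algorithm.

```python
import torch

def gradient_magnitude_prune(model, val_loader, loss_fn, sparsity=0.9, device="cpu"):
    """Sketch of gradient-and-magnitude-based pruning (assumed scoring rule).

    Zeroes the weights with the smallest |weight| * |gradient| scores, where
    gradients are accumulated over a validation set. The paper only states
    that magnitudes and validation gradients are both used; the exact
    combination here is an assumption for illustration.
    """
    model.to(device).eval()
    model.zero_grad()

    # Accumulate gradients of the loss over the validation set.
    for inputs, targets in val_loader:
        loss = loss_fn(model(inputs.to(device)), targets.to(device))
        loss.backward()

    # Score every weight by |w| * |dL/dw| and pick a single global threshold.
    scores = torch.cat([
        (p.detach().abs() * p.grad.abs()).flatten()
        for p in model.parameters() if p.grad is not None
    ])
    threshold = torch.quantile(scores, sparsity)

    # Permanently zero weights scoring below the threshold (no retraining).
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                mask = (p.abs() * p.grad.abs()) > threshold
                p.mul_(mask.to(p.dtype))

    return model
```

A global threshold (rather than one per layer) is one plausible design choice for reaching a target sparsity such as the 9X reduction reported above; per-layer thresholds would be an equally reasonable variant.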

Published

2022-06-28

How to Cite

Belay, K. (2022). Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13126-13127. https://doi.org/10.1609/aaai.v36i11.21699