DARB: A Density-Adaptive Regular-Block Pruning for Deep Neural Networks

Authors

  • Ao Ren, Northeastern University
  • Tao Zhang, Alibaba DAMO Academy
  • Yuhao Wang, Alibaba DAMO Academy
  • Sheng Lin, Northeastern University
  • Peiyan Dong, Northeastern University
  • Yen-Kuang Chen, Alibaba DAMO Academy
  • Yuan Xie, Alibaba DAMO Academy
  • Yanzhi Wang, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v34i04.6000

Abstract

The rapidly growing parameter volume of deep neural networks (DNNs) hinders the deployment of artificial intelligence applications on resource-constrained devices, such as mobile and wearable devices. Neural network pruning, one of the mainstream model compression techniques, has been studied extensively as a way to reduce model size and hence the amount of computation, so that state-of-the-art DNNs can be deployed on such devices with high runtime energy efficiency. In contrast to irregular pruning, which incurs high index-storage and decoding overhead, structured pruning techniques have been proposed as promising solutions. However, prior studies on structured pruning tackle the problem mainly from the perspective of facilitating hardware implementation, without deeply analyzing the characteristics of sparse neural networks. This neglect leads to an inefficient trade-off between regularity and pruning ratio, and consequently the potential of structured pruning is not fully exploited.

In this work, we examine the structural characteristics of irregularly pruned weight matrices, such as the diverse redundancy of different rows, the sensitivity of different rows to pruning, and the positions of the retained weights. Guided by these insights, we first propose the novel block-max weight masking (BMWM) method, which effectively retains the salient weights while imposing high regularity on the weight matrix. As a further optimization, we propose density-adaptive regular-block (DARB) pruning, which exploits the intrinsic characteristics of neural networks and thereby outperforms prior structured pruning work in both pruning ratio and decoding efficiency. Our experimental results show that DARB achieves pruning ratios of 13× to 25×, a 2.8× to 4.3× improvement over state-of-the-art counterparts on multiple neural network models and tasks. Moreover, DARB achieves 14.3× higher decoding efficiency than block pruning while maintaining a higher pruning ratio.
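The abstract describes BMWM and DARB only at a high level. As a rough illustration, the NumPy sketch below shows one plausible reading of the two ideas: BMWM keeps the max-magnitude weight inside each fixed-size block of a row, and the density-adaptive variant picks a per-row block size from that row's share of salient weights. The function names, the candidate block sizes, and the block-selection heuristic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def block_max_mask(weights, block_size):
    """Keep only the largest-magnitude weight in each consecutive
    length-`block_size` block of every row; zero out the rest.
    (Illustrative reading of BMWM, not the authors' code.)"""
    rows, cols = weights.shape
    assert cols % block_size == 0, "sketch assumes divisible row length"
    blocks = weights.reshape(rows, cols // block_size, block_size)
    keep = np.abs(blocks).argmax(axis=2)[:, :, None]  # max-|w| index per block
    mask = np.zeros(blocks.shape, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=2)
    return (blocks * mask).reshape(rows, cols)

def darb_mask(weights, target_ratio=16, candidates=(4, 8, 16, 32)):
    """Density-adaptive variant: rows with many salient weights get
    small blocks (more weights retained), sparse rows get large ones.
    The saliency threshold and block-size heuristic are assumptions."""
    thresh = np.quantile(np.abs(weights), 1.0 - 1.0 / target_ratio)
    out = np.zeros_like(weights)
    for r, row in enumerate(weights):
        density = max(np.mean(np.abs(row) >= thresh), 1e-6)
        # Aim for roughly one salient weight per block.
        block = min(candidates, key=lambda b: abs(b - 1.0 / density))
        n = (len(row) // block) * block  # ignore the ragged tail here
        out[r, :n] = block_max_mask(row[:n][None, :], block)[0]
    return out

# Example: prune a random 8x32 weight matrix.
w = np.random.randn(8, 32).astype(np.float32)
print(block_max_mask(w, 8))   # exactly 4 nonzeros per row
print(darb_mask(w))           # per-row block sizes adapt to density
```

Because each block retains exactly one weight, decoding a retained weight's position reduces to reading a short per-block offset (log2 of the block size in bits), which is consistent with the decoding-efficiency claim above.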

Published

2020-04-03

How to Cite

Ren, A., Zhang, T., Wang, Y., Lin, S., Dong, P., Chen, Y.-K., Xie, Y., & Wang, Y. (2020). DARB: A Density-Adaptive Regular-Block Pruning for Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5495-5502. https://doi.org/10.1609/aaai.v34i04.6000

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning