Balanced Sparsity for Efficient DNN Inference on GPU

Authors

  • Zhuliang Yao, Tsinghua University
  • Shijie Cao, Harbin Institute of Technology
  • Wencong Xiao, Beihang University
  • Chen Zhang, Microsoft Research Asia
  • Lanshun Nie, Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33015676

Abstract

In trained deep neural networks, unstructured pruning can remove redundant weights to lower storage cost. However, it requires customized hardware to speed up practical inference. Another line of work accelerates sparse model inference on general-purpose hardware by adopting coarse-grained sparsity to prune or regularize consecutive weights for efficient computation, but this often sacrifices model accuracy. In this paper, we propose a novel fine-grained sparsity approach, Balanced Sparsity, that achieves high model accuracy with efficient execution on commercial hardware. Our approach adapts to the high-parallelism property of GPUs, showing strong potential for sparsity in the wide deployment of deep learning services. Experimental results show that Balanced Sparsity achieves up to 3.1x practical speedup for model inference on GPU, while retaining the same high model accuracy as fine-grained sparsity.
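To make the idea in the abstract concrete, below is a minimal sketch of balanced fine-grained pruning: each row of a weight matrix is split into equal-sized blocks, and every block keeps the same number of largest-magnitude weights, so the nonzeros are evenly distributed for parallel GPU execution. The function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def balanced_prune(weights, block_size=4, keep_per_block=2):
    """Zero out weights so that every `block_size`-wide block in a row
    keeps exactly `keep_per_block` largest-magnitude entries.

    Illustrative sketch of balanced sparsity; names are hypothetical.
    """
    rows, cols = weights.shape
    assert cols % block_size == 0, "row length must divide into equal blocks"
    blocks = weights.reshape(rows, cols // block_size, block_size)
    # Indices of the top-k magnitudes within each block.
    topk = np.argsort(-np.abs(blocks), axis=-1)[..., :keep_per_block]
    mask = np.zeros_like(blocks, dtype=bool)
    np.put_along_axis(mask, topk, True, axis=-1)
    return (blocks * mask).reshape(rows, cols)

W = np.arange(1.0, 9.0).reshape(1, 8)  # [[1, 2, ..., 8]]
P = balanced_prune(W, block_size=4, keep_per_block=2)
# Each 4-wide block retains its 2 largest-magnitude weights:
# [[0., 0., 3., 4., 0., 0., 7., 8.]]
```

Because every block holds the same number of nonzeros, GPU threads assigned to different blocks finish in the same time, which is what distinguishes this from unstructured pruning with irregular per-row sparsity.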

Published

2019-07-17

How to Cite

Yao, Z., Cao, S., Xiao, W., Zhang, C., & Nie, L. (2019). Balanced Sparsity for Efficient DNN Inference on GPU. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5676-5683. https://doi.org/10.1609/aaai.v33i01.33015676

Section

AAAI Technical Track: Machine Learning