ESPACE: Accelerating Convolutional Neural Networks via Eliminating Spatial and Channel Redundancy

Authors

  • Shaohui Lin, Xiamen University
  • Rongrong Ji, Xiamen University
  • Chao Chen, Xiamen University
  • Feiyue Huang, Tencent Technology (Shanghai) Co., Ltd

DOI:

https://doi.org/10.1609/aaai.v31i1.10756

Abstract

Recent years have witnessed an extensive popularity of convolutional neural networks (CNNs) in various computer vision and artificial intelligence applications. However, the performance gains have come at the cost of substantial computational complexity, which prohibits their use in resource-limited applications such as mobile or embedded devices. While increasing attention has been paid to accelerating the internal network structure, the redundancy of the visual input is rarely considered. In this paper, we make the first attempt to reduce spatial and channel redundancy directly from the visual input for CNN acceleration. The proposed method, termed ESPACE (Elimination of SPAtial and Channel rEdundancy), works in three steps: First, the 3D channel redundancy of the convolutional layers is reduced by a set of low-rank approximations of the convolutional filters. Second, a novel mask-based selective processing scheme is proposed, which further speeds up the convolution operations by skipping unsalient spatial locations of the visual input. Third, the accelerated network is fine-tuned on the training data via back-propagation. The proposed method is evaluated on ImageNet 2012 with implementations on two widely adopted CNNs, i.e., AlexNet and GoogLeNet. In comparison with several recent CNN acceleration methods, the proposed scheme demonstrates new state-of-the-art acceleration performance, with 5.48× and 4.12× speedups on AlexNet and GoogLeNet, respectively, and a minimal decrease in classification accuracy.
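As a rough illustration of the first step, the channel redundancy of a convolutional layer can be reduced by a low-rank filter factorization: the 4D weight tensor is flattened into a matrix, truncated with an SVD, and replaced by two smaller convolutions. The following PyTorch sketch shows a generic factorization of this kind; the function name, the rank argument, and the SVD-based scheme are illustrative assumptions, not necessarily the exact decomposition used by ESPACE.

    import torch
    import torch.nn as nn

    def lowrank_decompose_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
        """Replace a Conv2d with two smaller convolutions via truncated SVD.

        Generic low-rank filter factorization (illustrative only, not
        necessarily the channel-redundancy elimination used by ESPACE).
        """
        N, C, kh, kw = conv.weight.shape                    # (out, in, kH, kW)
        W = conv.weight.data.reshape(N, C * kh * kw)        # flatten filters
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        U_r = U[:, :rank] * S[:rank]                        # (N, rank)
        V_r = Vh[:rank, :]                                  # (rank, C*kh*kw)

        # First conv keeps the original spatial kernel but only `rank` filters.
        conv1 = nn.Conv2d(C, rank, (kh, kw), stride=conv.stride,
                          padding=conv.padding, bias=False)
        conv1.weight.data = V_r.reshape(rank, C, kh, kw)

        # Second conv is a 1x1 recombination back to the N original outputs.
        conv2 = nn.Conv2d(rank, N, 1, bias=conv.bias is not None)
        conv2.weight.data = U_r.reshape(N, rank, 1, 1)
        if conv.bias is not None:
            conv2.bias.data = conv.bias.data.clone()

        return nn.Sequential(conv1, conv2)

The second step, mask-based selective processing, can likewise be sketched as an im2col convolution that only builds and multiplies the columns belonging to salient spatial positions, so unsalient locations cost no multiply-adds. The saliency criterion below (mean absolute activation against a fixed threshold) is a placeholder assumption; the paper's actual mask construction is not reproduced here.

    import torch
    import torch.nn.functional as F

    def masked_conv2d(x, weight, threshold=0.1):
        """Stride-1, 'same'-padding convolution computed only at salient sites.

        x: (B, C, H, W); weight: (N, C, kh, kw) with odd kh, kw.
        The saliency mask is a placeholder, not the ESPACE criterion.
        """
        B, C, H, W = x.shape
        N, _, kh, kw = weight.shape
        pad = (kh // 2, kw // 2)                            # keeps output H x W

        energy = x.abs().mean(dim=1)                        # (B, H, W)
        mask = (energy > threshold).reshape(B, -1)          # (B, H*W)

        cols = F.unfold(x, (kh, kw), padding=pad)           # (B, C*kh*kw, H*W)
        Wmat = weight.reshape(N, -1)                        # (N, C*kh*kw)

        out = x.new_zeros(B, N, H * W)
        for b in range(B):
            keep = mask[b].nonzero(as_tuple=True)[0]        # salient columns
            out[b][:, keep] = Wmat @ cols[b][:, keep]       # compute only these
        return out.reshape(B, N, H, W)

The third step, fine-tuning the accelerated network with back-propagation on the training data, uses standard training machinery and needs no separate sketch.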

Published

2017-02-12

How to Cite

Lin, S., Ji, R., Chen, C., & Huang, F. (2017). ESPACE: Accelerating Convolutional Neural Networks via Eliminating Spatial and Channel Redundancy. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10756

Section

Main Track: Machine Learning Applications