Dynamic Structure Pruning for Compressing CNNs
DOI:
https://doi.org/10.1609/aaai.v37i8.26127
Keywords:
ML: Learning on the Edge & Model Compression
Abstract
Structure pruning is an effective method to compress and accelerate neural networks. While filter and channel pruning are preferable to other structure pruning methods in terms of realistic acceleration and hardware compatibility, pruning methods with a finer granularity, such as intra-channel pruning, are expected to yield more compact and computationally efficient networks. Typical intra-channel pruning methods rely on a static, hand-crafted pruning granularity because the search space is large, which leaves room for improvement in their pruning performance. In this work, we introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning. In contrast to existing intra-channel pruning methods, the proposed method automatically optimizes dynamic pruning granularities in each layer while training deep neural networks. To achieve this, we propose a differentiable group learning method designed to efficiently learn a pruning granularity based on gradient-based learning of filter groups. The experimental results show that dynamic structure pruning achieves state-of-the-art pruning performance and better realistic acceleration on a GPU than channel pruning. In particular, it reduces the FLOPs of ResNet50 by 71.85% without accuracy degradation on the ImageNet dataset. Our code is available at https://github.com/irishev/DSP.
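To make the granularity distinction concrete, the following is a minimal NumPy sketch of intra-channel pruning: the filters of a convolutional layer are partitioned into groups along the output dimension, and each group prunes its own least-important input channels independently, rather than pruning whole channels for the entire layer at once. This is an illustrative simplification under assumed conventions (fixed equal-sized groups, L2-norm importance); the paper itself learns the grouping with a differentiable, gradient-based method, which is not reproduced here.

```python
import numpy as np

def intra_channel_prune(weight, num_groups, prune_ratio):
    """Illustrative intra-channel pruning (not the paper's learned grouping).

    weight: conv kernel of shape (out_ch, in_ch, kh, kw).
    Filters are split into `num_groups` equal groups along out_ch, and each
    group zeroes out its `prune_ratio` fraction of least-important input
    channels, scored by the group's L2 norm per input channel.
    """
    out_ch, in_ch = weight.shape[:2]
    assert out_ch % num_groups == 0, "groups must divide out_ch in this sketch"
    pruned = weight.copy()
    group_size = out_ch // num_groups
    n_prune = int(in_ch * prune_ratio)
    for g in range(num_groups):
        grp = pruned[g * group_size:(g + 1) * group_size]
        # importance of each input channel within this filter group only
        scores = np.sqrt((grp ** 2).sum(axis=(0, 2, 3)))
        drop = np.argsort(scores)[:n_prune]
        grp[:, drop] = 0.0  # each group may drop different input channels
    return pruned
```

Because each group can drop a different set of input channels, the result is finer-grained than channel pruning (which must drop the same channels for all filters), while the pruned weights in each group still form a dense sub-tensor, which is what makes this granularity hardware-friendly.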
Published
2023-06-26
How to Cite
Park, J.-H., Kim, Y., Kim, J., Choi, J.-Y., & Lee, S. (2023). Dynamic Structure Pruning for Compressing CNNs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9408-9416. https://doi.org/10.1609/aaai.v37i8.26127
Section
AAAI Technical Track on Machine Learning III