Learning Adversarially Robust Sparse Networks via Weight Reparameterization
DOI:
https://doi.org/10.1609/aaai.v37i7.26027
Keywords:
ML: Adversarial Learning & Robustness, CV: Adversarial Attacks & Robustness, ML: Learning on the Edge & Model Compression
Abstract
Although increasing model size can enhance the adversarial robustness of deep neural networks, resource-constrained environments impose strict sparsity constraints. While recent robust pruning techniques offer a promising direction for obtaining adversarially robust sparse networks, they perform poorly at high sparsity. In this work, we bridge this performance gap by reparameterizing network parameters to learn the sparse structure and the robustness simultaneously. Specifically, we introduce Twin-Rep, which reparameterizes the original weights as the product of two factors during training and prunes the reparameterized weights to satisfy the target sparsity constraint. Twin-Rep implicitly adds the sparsity constraint without changing the robust training objective and can thus enhance robustness under high sparsity. We also introduce another variant of weight reparameterization for better channel pruning. At inference time, we restore the original weight structure to obtain compact and robust networks. Extensive experiments on diverse datasets demonstrate that our method achieves state-of-the-art results, outperforming current sparse robust training and robustness-aware pruning methods. Our code is available at https://github.com/UCAS-LCH/Twin-Rep.
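The abstract describes the mechanism only at a high level. The sketch below illustrates one plausible reading of it in PyTorch: the effective weight is the product of two trainable factors, magnitude pruning is applied to that product, and the factors are collapsed back into a single sparse weight for inference. The elementwise (Hadamard) product, the pruning criterion, and all names (TwinRepLinear, effective_weight, prune, export_weight) are assumptions for illustration, not the paper's implementation; the released code at the repository above is authoritative.

```python
# Minimal sketch of weight reparameterization as a product of two factors,
# under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwinRepLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Two trainable factors whose product plays the role of the weight.
        self.w1 = nn.Parameter(torch.empty(out_features, in_features))
        self.w2 = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.w1)
        nn.init.ones_(self.w2)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Binary mask imposing the target sparsity on the reparameterized weight.
        self.register_buffer("mask", torch.ones(out_features, in_features))

    def effective_weight(self) -> torch.Tensor:
        # Reparameterized weight: product of the two factors, masked.
        return self.w1 * self.w2 * self.mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.effective_weight(), self.bias)

    @torch.no_grad()
    def prune(self, sparsity: float) -> None:
        # Zero out the `sparsity` fraction of entries of the reparameterized
        # weight with the smallest magnitude (assumed criterion).
        scores = (self.w1 * self.w2).abs()
        k = int(sparsity * scores.numel())
        if k > 0:
            threshold = torch.kthvalue(scores.flatten(), k).values
            self.mask.copy_(scores > threshold)

    @torch.no_grad()
    def export_weight(self) -> torch.Tensor:
        # Collapse the two factors into a single sparse weight for compact inference.
        return self.effective_weight().clone()
```

In this reading, standard robust training (e.g., adversarial training) is run on the reparameterized model unchanged; only the parameterization and the pruning step differ from a dense baseline.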
Published
2023-06-26
How to Cite
Li, C., Qiu, Q., Zhang, Z., Guo, J., & Cheng, X. (2023). Learning Adversarially Robust Sparse Networks via Weight Reparameterization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8527-8535. https://doi.org/10.1609/aaai.v37i7.26027
Issue
Section
AAAI Technical Track on Machine Learning II