Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation
DOI:
https://doi.org/10.1609/aaai.v38i6.28390
Keywords:
CV: Applications, ML: Applications
Abstract
Adversarial Robustness Distillation (ARD) is a promising task for addressing the limited adversarial robustness of small-capacity models while reducing the expensive computational cost of Adversarial Training (AT). Despite their good robust performance, existing ARD methods remain impractical to deploy in real-world high-security scenarios because they rely entirely on the original data or on publicly available data with a similar distribution. In practice, such data are almost always private, specific, and distinctive in scenarios that demand high robustness. To tackle these issues, we propose a challenging but significant task called Data-Free Adversarial Robustness Distillation (DFARD), which aims to train small, easily deployable, robust models without relying on data. We demonstrate that the challenge lies in a lower upper bound on the information available for knowledge transfer, making it crucial to mine and transfer knowledge more efficiently. Inspired by human education, we design a plug-and-play Interactive Temperature Adjustment (ITA) strategy to improve the efficiency of knowledge transfer and propose an Adaptive Generator Balance (AGB) module to retain more data information. Our method uses adaptive hyperparameters to avoid extensive parameter tuning and significantly outperforms combinations of existing techniques. Meanwhile, it achieves stable and reliable performance on multiple benchmarks.
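For readers unfamiliar with temperature-scaled distillation, the sketch below shows the standard softened-KL objective that ARD methods build on, together with a hypothetical adaptive temperature update in the spirit of the ITA idea described above. The `update_temperature` heuristic, its thresholds, and all names are illustrative assumptions, not the authors' actual ITA rule or implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau):
    """Temperature-scaled KL divergence between teacher and student outputs."""
    # Soften both distributions with temperature tau; the tau**2 factor keeps
    # gradient magnitudes comparable across different temperatures.
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (tau ** 2)

def update_temperature(tau, kd_loss, low=1.0, high=10.0, step=0.1):
    # Hypothetical interactive schedule (assumption, not the paper's ITA):
    # raise tau to soften targets when the student already tracks the teacher
    # well, and lower tau to sharpen targets when the gap is still large.
    if kd_loss.item() < 0.5:
        return min(high, tau + step)
    return max(low, tau - step)

# Toy usage on random logits, just to show the interaction of the two pieces.
tau = 4.0
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits, tau)
tau = update_temperature(tau, loss)
```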
Published
2024-03-24
How to Cite
Wang, Y., Chen, Z., Yang, D., Guo, P., Jiang, K., Zhang, W., & Qi, L. (2024). Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5776-5784. https://doi.org/10.1609/aaai.v38i6.28390
Issue
Section
AAAI Technical Track on Computer Vision V