Adversarially Robust Distillation

Authors

  • Micah Goldblum, University of Maryland
  • Liam Fowl, University of Maryland
  • Soheil Feizi, University of Maryland
  • Tom Goldstein, University of Maryland

DOI:

https://doi.org/10.1609/aaai.v34i04.5816

Abstract

Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. This paper studies how adversarial robustness transfers from teacher to student during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when distilled only on clean images. We then introduce Adversarially Robust Distillation (ARD) for distilling robustness onto student networks. In addition to producing small models with high test accuracy, as in conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture in terms of robust accuracy, surpassing state-of-the-art methods on standard robustness benchmarks. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.
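The abstract describes ARD only at a high level. As a rough illustration, the PyTorch sketch below shows one plausible form of a robust-distillation objective: a temperature-scaled KL distillation term on adversarial inputs plus a clean cross-entropy term. The attack (cross-entropy PGD against the student), temperature T, mixing weight alpha, and perturbation budget are illustrative assumptions and may not match the paper's exact formulation or reported settings.

# A minimal robust-distillation sketch in the spirit of ARD.
# Hyperparameters and the exact loss composition are assumptions,
# not the paper's reported configuration.
import torch
import torch.nn.functional as F

def pgd_attack(student, x, y, eps=8 / 255, step=2 / 255, iters=10):
    # l-inf PGD against the student (assumption: standard cross-entropy
    # PGD; the paper's inner maximization may differ).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(student(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def ard_loss(student, teacher, x, y, T=4.0, alpha=0.9):
    # Distill the teacher's soft labels (computed on clean inputs) onto
    # the student's predictions for adversarial inputs, mixed with a
    # clean cross-entropy term.
    x_adv = pgd_attack(student, x, y)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=1)
    student_log_probs = F.log_softmax(student(x_adv) / T, dim=1)
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    ce = F.cross_entropy(student(x), y)
    return alpha * kl + (1 - alpha) * ce

# Usage inside a training loop (student/teacher are nn.Module classifiers):
#   loss = ard_loss(student, teacher, images, labels)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()

In training, the teacher would be a fixed, adversarially trained network; only the student's parameters are updated, with gradients flowing through the student on both the adversarial KL term and the clean cross-entropy term.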

Published

2020-04-03

How to Cite

Goldblum, M., Fowl, L., Feizi, S., & Goldstein, T. (2020). Adversarially Robust Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3996-4003. https://doi.org/10.1609/aaai.v34i04.5816

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning