ImageNet Pre-training Also Transfers Non-robustness

Authors

  • Jiaming Zhang, Beijing Jiaotong University
  • Jitao Sang, Beijing Jiaotong University; Peng Cheng Lab
  • Qi Yi, Beijing Jiaotong University
  • Yunfan Yang, Beijing Jiaotong University
  • Huiwen Dong, Beijing Normal University
  • Jian Yu, Beijing Jiaotong University

DOI:

https://doi.org/10.1609/aaai.v37i3.25452

Keywords:

CV: Adversarial Attacks & Robustness, ML: Adversarial Learning & Robustness, CV: Representation Learning for Vision, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

ImageNet pre-training has enabled state-of-the-art results on many tasks. Despite its recognized contribution to generalization, we observed in this study that ImageNet pre-training also transfers adversarial non-robustness from the pre-trained model to the fine-tuned model in downstream classification tasks. We first conducted experiments on various datasets and network backbones to uncover the adversarial non-robustness of fine-tuned models. Further analysis examined the knowledge learned by the fine-tuned model and by the standard model, and revealed that the non-robustness stems from non-robust features transferred from the ImageNet pre-trained model. Finally, we analyzed the feature-learning preferences of the pre-trained model, explored the factors influencing robustness, and introduced a simple robust ImageNet pre-training solution. Our code is available at https://github.com/jiamingzhang94/ImageNet-Pretraining-transfers-non-robustness.
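
The sketch below illustrates the experimental setup the abstract describes: fine-tune an ImageNet pre-trained backbone on a downstream classification task, then measure its accuracy under an adversarial attack. It is a minimal illustration under assumed choices, namely PyTorch/torchvision, CIFAR-10 as the downstream dataset, a ResNet-18 backbone, a single fine-tuning epoch, and a one-step FGSM attack at eps = 8/255; it is not the authors' released code (see the linked repository for that).

```python
# Minimal sketch (assumptions: CIFAR-10 downstream task, ResNet-18 backbone,
# one fine-tuning epoch, FGSM at eps = 8/255). Not the authors' released code.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet pre-trained backbone, re-headed for the 10-class downstream task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 10)
model = model.to(device)

transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_loader = DataLoader(
    datasets.CIFAR10("data", train=True, download=True, transform=transform),
    batch_size=64, shuffle=True)
test_loader = DataLoader(
    datasets.CIFAR10("data", train=False, download=True, transform=transform),
    batch_size=64)

# Standard (non-adversarial) fine-tuning; a single epoch for brevity.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
model.train()
for x, y in train_loader:
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    optimizer.step()

def fgsm(model, x, y, eps):
    """One-step L-infinity attack: move x along the sign of the loss gradient."""
    model.zero_grad(set_to_none=True)
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Robust accuracy of the fine-tuned model under attack.
model.eval()
correct = total = 0
for x, y in test_loader:
    x, y = x.to(device), y.to(device)
    x_adv = fgsm(model, x, y, eps=8 / 255)
    with torch.no_grad():
        correct += (model(x_adv).argmax(1) == y).sum().item()
    total += y.numel()
print(f"Robust accuracy under FGSM: {correct / total:.3f}")
```

Per the abstract's claim, this fine-tuned model would be expected to score noticeably lower here than a counterpart trained from scratch on the same downstream data.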

Published

2023-06-26

How to Cite

Zhang, J., Sang, J., Yi, Q., Yang, Y., Dong, H., & Yu, J. (2023). ImageNet Pre-training Also Transfers Non-robustness. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3436-3444. https://doi.org/10.1609/aaai.v37i3.25452

Issue

Vol. 37 No. 3 (2023)

Section

AAAI Technical Track on Computer Vision III