Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning

Authors

  • Fengxiang Yang, Xiamen University
  • Zhun Zhong, University of Trento
  • Hong Liu, National Institute of Informatics
  • Zheng Wang, National Institute of Informatics
  • Zhiming Luo, Xiamen University
  • Shaozi Li, Xiamen University
  • Nicu Sebe, University of Trento & Huawei Research
  • Shin'ichi Satoh, National Institute of Informatics

Keywords

Image and Video Retrieval

Abstract

Recent advances in person re-identification (re-ID) have led to impressive retrieval accuracy. However, existing re-ID models are challenged by adversarial examples crafted by adding quasi-imperceptible perturbations. Moreover, re-ID systems face the domain shift issue, i.e., the training and testing domains are not consistent. In this study, we argue that learning a powerful attacker with high universality, one that works well on unseen domains, is an important step toward promoting the robustness of re-ID systems. We therefore introduce a novel universal attack algorithm called ``MetaAttack'' for person re-ID. MetaAttack can mislead re-ID models on unseen domains with a single universal adversarial perturbation. Specifically, to capture common patterns across different domains, we propose a meta-learning scheme that seeks the universal perturbation via the gradient interaction between meta-train and meta-test stages formed by two datasets. We also take advantage of a virtual dataset (PersonX), instead of a real one, to conduct the meta-test. This scheme not only enables us to learn with more comprehensive variation factors but also mitigates the negative effects caused by the biased factors of real datasets. Experiments on three large-scale re-ID datasets demonstrate the effectiveness of our method in attacking re-ID models on unseen domains. Our visualization results reveal some new properties of existing re-ID systems, which can guide the design of more robust re-ID models. Code and supplemental material are available at \url{https://github.com/FlyingRoastDuck/MetaAttack_AAAI21}.
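The meta-learning scheme described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: it assumes linear "feature extractors" standing in for deep re-ID networks, an attack loss that maximizes feature displacement, and an L-infinity bound on the universal perturbation. The MAML-style structure (inner meta-train step, meta-test gradient evaluated at the updated point, combined update) follows the abstract's description; all variable names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for re-ID feature extractors on three domains.
# W_train plays the role of a real dataset's model (meta-train),
# W_test a virtual dataset's model (meta-test, e.g. PersonX),
# W_unseen a model from a domain never used during learning.
d = 16
W_train = rng.normal(size=(8, d))
W_test = rng.normal(size=(8, d))
W_unseen = rng.normal(size=(8, d))

def attack_loss(W, delta):
    """Negative squared feature displacement; minimizing it pushes
    perturbed features away from clean ones (f(x+delta) - f(x) = W @ delta)."""
    return -np.sum((W @ delta) ** 2)

def attack_grad(W, delta):
    # Analytic gradient of attack_loss with respect to delta.
    return -2.0 * W.T @ (W @ delta)

eps, alpha, beta = 0.1, 0.01, 0.01        # perturbation budget, step sizes
delta = rng.normal(scale=1e-3, size=d)    # universal perturbation (one vector for all inputs)

for _ in range(200):
    g_train = attack_grad(W_train, delta)      # meta-train gradient
    delta_inner = delta - alpha * g_train      # inner step on the meta-train domain
    g_test = attack_grad(W_test, delta_inner)  # meta-test gradient at the updated point
    delta = delta - beta * (g_train + g_test)  # combined meta-update
    delta = np.clip(delta, -eps, eps)          # keep it quasi-imperceptible (L_inf ball)

# A perturbation learned this way should also displace features of the unseen model.
shift_unseen = np.linalg.norm(W_unseen @ delta)
```

The key design point mirrored here is that the meta-test gradient is evaluated at `delta_inner`, the point reached after the meta-train step, so the final update favors directions that transfer across domains rather than overfitting to the meta-train model alone.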

Published

2021-05-18

How to Cite

Yang, F., Zhong, Z., Liu, H., Wang, Z., Luo, Z., Li, S., Sebe, N., & Satoh, S. (2021). Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3128-3135. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16422

Section

AAAI Technical Track on Computer Vision III