Once and for All: Universal Transferable Adversarial Perturbation against Deep Hashing-Based Facial Image Retrieval

Authors

  • Long Tang, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
  • Dengpan Ye, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
  • Yunna Lv, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
  • Chuanxi Chen, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
  • Yunming Zhang, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v38i6.28319

Keywords:

CV: Image and Video Retrieval, CV: Adversarial Attacks & Robustness

Abstract

Deep Hashing (DH)-based image retrieval has been widely applied to face-matching systems due to its accuracy and efficiency. However, this convenience comes with an increased risk of privacy leakage. DH models inherit the vulnerability to adversarial attacks, which can be used to prevent the retrieval of private images. Existing adversarial attacks against DH typically target a single image or a specific class of images and lack a universal adversarial perturbation for the entire hash dataset. In this paper, we propose the first universal transferable adversarial perturbation against DH-based facial image retrieval: a single perturbation can protect all images. Specifically, we explore the relationship between clusters learned by different DH models and define the optimization objective of the universal perturbation as moving away from the overall hash center. To mitigate the difficulty of this single-objective optimization, we randomly obtain sub-cluster centers and further propose sub-task-based meta-learning to aid the overall optimization. We evaluate our method on popular facial datasets and DH models, demonstrating impressive cross-image, cross-identity, cross-model, and cross-scheme universal anti-retrieval performance. Compared with state-of-the-art methods, our performance is competitive in white-box settings and shows significant improvements of 10%-70% in transferability across all black-box settings.
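
As a rough illustration of the idea sketched in the abstract, the following PyTorch-style snippet optimizes one shared perturbation so that the hash codes of all perturbed images move away from an estimated overall hash center. The model interface (hash_model), the L-infinity budget, the image resolution, and the inner-product loss are illustrative assumptions on my part; the paper's full method additionally uses randomly obtained sub-cluster centers and sub-task-based meta-learning, which are omitted here.

    # Minimal sketch (not the authors' code): one universal perturbation pushed away
    # from the overall hash center of a deep-hashing model.
    import torch

    def overall_hash_center(codes: torch.Tensor) -> torch.Tensor:
        # codes: (N, K) hash outputs for the protected image set.
        # A common heuristic: the center is the sign of the mean code.
        return torch.sign(codes.mean(dim=0))

    def universal_perturbation(hash_model, loader, epsilon=8 / 255,
                               steps=100, lr=1e-2, device="cpu"):
        # `loader` is assumed to yield (image, label) batches of 3x224x224 images in [0, 1].
        with torch.no_grad():
            codes = torch.cat([torch.tanh(hash_model(x.to(device))) for x, *_ in loader])
        center = overall_hash_center(codes).to(device)

        # One perturbation shared by every image, constrained to an L-infinity ball.
        delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)

        for _ in range(steps):
            for x, *_ in loader:
                x = x.to(device)
                adv_codes = torch.tanh(hash_model((x + delta).clamp(0, 1)))
                # Minimizing the inner product with the center pushes adversarial
                # codes away from it (i.e., maximizes Hamming distance).
                loss = (adv_codes * center).sum(dim=1).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    delta.clamp_(-epsilon, epsilon)
        return delta.detach()
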

Published

2024-03-24

How to Cite

Tang, L., Ye, D., Lv, Y., Chen, C., & Zhang, Y. (2024). Once and for All: Universal Transferable Adversarial Perturbation against Deep Hashing-Based Facial Image Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5136-5144. https://doi.org/10.1609/aaai.v38i6.28319

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V