Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model

Authors

  • Decheng Liu — School of Cyber Engineering, Xidian University, Xi’an, China; Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai, China
  • Xijun Wang — School of Artificial Intelligence, Xidian University, Xi’an, China
  • Chunlei Peng — School of Cyber Engineering, Xidian University, Xi’an, China; Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai, China
  • Nannan Wang — School of Telecommunications Engineering, Xidian University, Xi’an, China
  • Ruimin Hu — Hangzhou Institute of Technology, Xidian University, Xi’an, China
  • Xinbo Gao — Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China

DOI:

https://doi.org/10.1609/aaai.v38i4.28147

Keywords:

CV: Biometrics, Face, Gesture & Pose, CV: Adversarial Attacks & Robustness

Abstract

Adversarial attacks add perturbations to a source image to cause misclassification by a target model, which demonstrates their potential for attacking face recognition systems. Existing adversarial face image generation methods still cannot achieve satisfactory performance because of low transferability and high detectability. In this paper, we propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space, leveraging the strong inpainting capability of the latent diffusion model to produce realistic adversarial images. Specifically, we propose an identity-sensitive conditioned diffusion generative model that generates semantic perturbations in the surrounding regions of the face. The designed adaptive strength-based adversarial perturbation algorithm ensures both attack transferability and stealthiness. Extensive qualitative and quantitative experiments on the public FFHQ and CelebA-HQ datasets show that the proposed method achieves superior performance compared with state-of-the-art methods, without requiring an extra generative-model training process. The source code is available at https://github.com/kopper-xdu/Adv-Diffusion.
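The abstract describes perturbing the image's latent representation under an identity objective rather than the raw pixels. The sketch below illustrates that general idea only; it is not the authors' Adv-Diffusion pipeline (which uses a pretrained latent diffusion model with inpainting and an adaptive perturbation strength). The `Encoder`, `Decoder`, and `FaceEmbedder` modules and the `latent_identity_attack` function are hypothetical stand-ins introduced here so the example runs end to end.

```python
# Minimal sketch: latent-space adversarial identity perturbation (PyTorch).
# All modules below are toy stand-ins, NOT the models used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Stand-in for a pretrained image -> latent encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 4, kernel_size=8, stride=8)

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Stand-in for a pretrained latent -> image decoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.ConvTranspose2d(4, 3, kernel_size=8, stride=8)

    def forward(self, z):
        return torch.sigmoid(self.net(z))


class FaceEmbedder(nn.Module):
    """Stand-in for a face-recognition backbone producing identity embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.fc = nn.Linear(3 * 4 * 4, dim)

    def forward(self, x):
        feat = self.pool(x).flatten(1)
        return F.normalize(self.fc(feat), dim=-1)


def latent_identity_attack(x_src, encoder, decoder, embedder,
                           steps=20, step_size=0.05, eps=0.5):
    """Iteratively perturb the latent code so that the decoded image's identity
    embedding moves away from the source identity (a dodging attack); an
    impersonation attack would instead move it toward a target embedding."""
    with torch.no_grad():
        z0 = encoder(x_src)          # clean latent code
        id_src = embedder(x_src)     # source identity embedding (constant)

    delta = torch.zeros_like(z0, requires_grad=True)
    for _ in range(steps):
        x_adv = decoder(z0 + delta)                   # decode perturbed latent
        id_adv = embedder(x_adv)
        # Minimizing cosine similarity pushes the identity away from the source.
        loss = F.cosine_similarity(id_adv, id_src, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()    # signed gradient step
            delta.clamp_(-eps, eps)                   # bound the latent change
        delta.grad = None
    return decoder(z0 + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    enc, dec, emb = Encoder(), Decoder(), FaceEmbedder()
    face = torch.rand(1, 3, 64, 64)                   # toy "face" image
    adv_face = latent_identity_attack(face, enc, dec, emb)
    print("adversarial image shape:", adv_face.shape)
```

In this toy setting the perturbation budget `eps` is applied in latent space, so pixel-level changes appear after decoding, which is the intuition behind the abstract's claim that latent-space attacks can remain imperceptible while still shifting the identity embedding.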

Published

2024-03-24

How to Cite

Liu, D., Wang, X., Peng, C., Wang, N., Hu, R., & Gao, X. (2024). Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3585-3593. https://doi.org/10.1609/aaai.v38i4.28147

Issue

Vol. 38 No. 4 (2024)
Section

AAAI Technical Track on Computer Vision III