Adversarial Robust Safeguard for Evading Deep Facial Manipulation

Authors

  • Jiazhi Guan DCST, BNRist, Tsinghua University
  • Yi Zhao Beijing Institute of Technology
  • Zhuoer Xu Ant Group
  • Changhua Meng Ant Group
  • Ke Xu DCST, BNRist, Tsinghua University; Zhongguancun Laboratory
  • Youjian Zhao DCST, BNRist, Tsinghua University; Zhongguancun Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i1.27762

Keywords:

APP: Security, APP: Misinformation & Fake News

Abstract

The non-consensual exploitation of facial manipulation has emerged as a pressing societal concern. Alongside efforts to detect such fake content, recent research has advocated countering manipulation techniques through proactive intervention, specifically by injecting adversarial noise that impedes the manipulation in advance. Nevertheless, we show that current methods, which pay insufficient attention to robustness, fail to provide protection once simple perturbations, e.g., blurring, are applied. In addition, traditional optimization-based methods scale poorly: their time-intensive iterative pipelines struggle to accommodate substantial growth in data volume. To address these challenges, we propose a learning-based model, Adversarial Robust Safeguard (ARS), that generates the desired protection noise in a single forward pass while exhibiting heightened resistance to prevalent perturbations. Specifically, our method adopts a two-way protection design: a basic protection component responsible for generating effective noise features, coupled with a robust protection component for further enhancement. In robust protection, we first fuse image features with a spatially duplicated noise embedding, thereby accounting for inherent information redundancy. We then devise a combination of a differentiable perturbation module and an adversarial network to simulate potential information degradation during training. For evaluation, we conduct experiments on four manipulation methods and comprehensively compare against recent works. The results show that our method achieves good visual quality with pronounced robustness against varied perturbations at different levels.
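
To make the two-way design concrete, the following is a minimal PyTorch-style sketch of the pipeline outlined in the abstract: a basic branch that produces noise features in one forward pass, a robust branch that fuses them with a spatially duplicated noise embedding, and a differentiable stand-in for the perturbation module applied during training. All module names, layer shapes, the noise budget, and the specific degradation (blur plus resampling) are illustrative assumptions rather than the authors' released implementation; the adversarial network and the attacked manipulation models are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicProtection(nn.Module):
    # Basic protection: maps a face image to protective noise features in one forward pass.
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, image):
        return self.encoder(image)

class RobustProtection(nn.Module):
    # Robust protection: fuses image-derived noise features with a spatially
    # duplicated (tiled) noise embedding to exploit information redundancy.
    def __init__(self, channels=64, embed_dim=16):
        super().__init__()
        self.noise_embedding = nn.Parameter(torch.randn(1, embed_dim, 1, 1))
        self.fuse = nn.Sequential(
            nn.Conv2d(channels + embed_dim, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, noise_feat):
        b, _, h, w = noise_feat.shape
        tiled = self.noise_embedding.expand(b, -1, h, w)  # duplicate over the spatial grid
        return self.fuse(torch.cat([noise_feat, tiled], dim=1))

def differentiable_perturbation(x):
    # Stand-in for the differentiable perturbation module: blur-like smoothing
    # followed by down/up-sampling, both of which keep gradients intact.
    h, w = x.shape[-2:]
    x = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    x = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
    return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

# Usage sketch: one forward pass yields the protection noise; the degraded
# protected image would then be fed to the targeted manipulation model so the
# training loss can enforce robustness against such perturbations.
basic, robust = BasicProtection(), RobustProtection()
image = torch.rand(1, 3, 256, 256)                    # face image in [0, 1]
noise = robust(basic(image))                          # bounded noise in [-1, 1]
protected = (image + (8 / 255) * noise).clamp(0, 1)   # visually imperceptible protection
degraded = differentiable_perturbation(protected)
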

Published

2024-03-25

How to Cite

Guan, J., Zhao, Y., Xu, Z., Meng, C., Xu, K., & Zhao, Y. (2024). Adversarial Robust Safeguard for Evading Deep Facial Manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 118-126. https://doi.org/10.1609/aaai.v38i1.27762

Section

AAAI Technical Track on Application Domains