Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation

Authors

  • Tianyi Chu, Zhejiang University
  • Wei Xing, Zhejiang University
  • Jiafu Chen, Zhejiang University
  • Zhizhong Wang, Zhejiang University
  • Jiakai Sun, Zhejiang University
  • Lei Zhao, Zhejiang University
  • Haibo Chen, Nanjing University of Science and Technology
  • Huaizhong Lin, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v38i2.27900

Keywords:

CV: Applications, CV: Language and Vision

Abstract

Existing generative adversarial network (GAN) based conditional image generative models typically produce a fixed output for the same conditional input, which is unreasonable for highly subjective tasks such as large-mask image inpainting or style transfer. On the other hand, GAN-based diverse image generation methods require retraining/fine-tuning the network or designing complex noise injection functions, which is computationally expensive and task-specific, or they struggle to generate high-quality results. Given that many deterministic conditional image generative models already produce high-quality yet fixed results, we raise an intriguing question: is it possible for pre-trained deterministic conditional image generative models to generate diverse results without changing network structures or parameters? To answer this question, we re-examine conditional image generation tasks from the perspective of adversarial attack and propose a simple and efficient plug-in projected gradient descent (PGD)-like method for diverse and controllable image generation. The key idea is to attack the pre-trained deterministic generative model by adding a micro perturbation to the input condition. In this way, diverse results can be generated without any adjustment of network structures or fine-tuning of the pre-trained models. In addition, we can control which diverse results are generated by specifying the attack direction according to a reference text or image. Our work opens the door to applying adversarial attacks to low-level vision tasks, and experiments on various conditional image generation tasks demonstrate the effectiveness and superiority of the proposed method.
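The core idea described above can be sketched as a PGD-style loop: starting from a small random perturbation of the input condition, repeatedly take signed-gradient ascent steps on a diversity objective while projecting the perturbation back into an L∞ ball, all with the generator's weights frozen. The sketch below is illustrative only and not the paper's implementation: the diversity loss (distance from the unperturbed output), the budget `eps`, step size `alpha`, and the helper name `attack_for_diversity` are all assumptions; the paper's controllable variant would swap in a loss aligned with a reference text or image instead.

```python
import torch

def attack_for_diversity(generator, condition, steps=10, eps=0.03, alpha=0.01, seed=None):
    """Illustrative PGD-like attack on a frozen deterministic generator.

    Perturbs the input `condition` (not the weights) to push the output
    away from the deterministic baseline, yielding a diverse result.
    The L2-distance objective here is a stand-in; a reference-guided
    loss would make the attack direction controllable.
    """
    generator.eval()  # weights stay fixed throughout
    with torch.no_grad():
        base_out = generator(condition)  # the original deterministic output
    if seed is not None:
        torch.manual_seed(seed)  # different seeds -> different diverse samples
    # random start inside the L-infinity ball of radius eps
    delta = torch.empty_like(condition).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        out = generator(condition + delta)
        # ascend on the distance from the baseline output (diversity objective)
        loss = torch.nn.functional.mse_loss(out, base_out)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            # signed-gradient step, then project back into the eps-ball
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    with torch.no_grad():
        # clamp assumes an image-like condition in [0, 1]
        return generator((condition + delta).clamp(0.0, 1.0))
```

Because only the micro perturbation is optimized, the procedure plugs into any pre-trained conditional generator without architectural changes; re-running it with a different random start produces a different sample.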

Published

2024-03-24

How to Cite

Chu, T., Xing, W., Chen, J., Wang, Z., Sun, J., Zhao, L., Chen, H., & Lin, H. (2024). Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 1362-1370. https://doi.org/10.1609/aaai.v38i2.27900

Section

AAAI Technical Track on Computer Vision I