Generating Adversarial yet Inconspicuous Patches with a Single Image (Student Abstract)

Authors

  • Jinqi Luo Nanyang Technological University
  • Tao Bai Nanyang Technological University
  • Jun Zhao Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v35i18.17915

Keywords:

Adversarial Example, Adversarial Attack, GAN, Deep Learning, Machine Learning

Abstract

Deep neural networks have been shown to be vulnerable to adversarial patches: exotic patterns that cause a model to make wrong predictions. However, existing approaches to adversarial patch generation rarely consider contextual consistency between the patch and the image background, so such patches are easily detected by human observers. Moreover, these methods require large amounts of training data, which is computationally expensive. To overcome these challenges, we propose an approach that generates adversarial yet inconspicuous patches from a single image. In our approach, adversarial patches are produced in a coarse-to-fine manner with multiple scales of generators and discriminators. Patch locations are selected according to the perceptual sensitivity of the victim model, and contextual information is encoded during min-max training to make the patches consistent with their surroundings.
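The abstract does not specify how perceptual sensitivity is measured, but the location-selection step can be sketched as a sliding-window search over a sensitivity map (in practice such a map might come from the magnitude of the victim model's input gradients). The function name `select_patch_location` and the toy map below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_patch_location(sensitivity, patch_size):
    """Return the top-left corner of the patch-sized window with the
    highest summed sensitivity, a simple proxy for placing the patch
    where the victim model is most perceptually sensitive.
    NOTE: a hypothetical sketch, not the authors' actual procedure."""
    h, w = sensitivity.shape
    ph, pw = patch_size
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(h - ph + 1):
        for x in range(w - pw + 1):
            score = sensitivity[y:y + ph, x:x + pw].sum()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Toy sensitivity map; in practice this could be |dLoss/dInput|
# averaged over channels for the victim model.
sens = np.zeros((8, 8))
sens[2:5, 3:6] = 1.0  # a highly sensitive region
print(select_patch_location(sens, (3, 3)))  # -> (2, 3)
```

The exhaustive search is quadratic in image size; for larger inputs the same window sums can be computed in one pass with an integral image.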

Published

2021-05-18

How to Cite

Luo, J., Bai, T., & Zhao, J. (2021). Generating Adversarial yet Inconspicuous Patches with a Single Image (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15837-15838. https://doi.org/10.1609/aaai.v35i18.17915

Section

AAAI Student Abstract and Poster Program