A Fusion-Denoising Attack on InstaHide with Data Augmentation

Authors

  • Xinjian Luo, National University of Singapore
  • Xiaokui Xiao, National University of Singapore
  • Yuncheng Wu, National University of Singapore
  • Juncheng Liu, National University of Singapore
  • Beng Chin Ooi, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v36i2.20084

Keywords:

Computer Vision (CV)

Abstract

InstaHide is a state-of-the-art mechanism for protecting private training images: it mixes multiple private images and modifies them so that their visual features are indistinguishable to the naked eye. In recent work, however, Carlini et al. showed that it is possible to reconstruct private images from the encrypted dataset generated by InstaHide. We first demonstrate that Carlini et al.'s attack can be easily defeated by incorporating data augmentation into InstaHide, which raises a natural question: is InstaHide with data augmentation secure? In this paper, we provide a negative answer to this question by devising an attack that recovers private images from the outputs of InstaHide even when data augmentation is present. The basic idea is to use a comparative network to identify encrypted images that are likely to correspond to the same private image, and then employ a fusion-denoising network to restore the private image from the encrypted ones, taking into account the effects of data augmentation. Extensive experiments demonstrate the effectiveness of the proposed attack in comparison to Carlini et al.'s attack.
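For readers unfamiliar with the scheme under attack, the InstaHide encryption step summarized above (a random convex mixture of private images followed by a random per-pixel sign flip) can be sketched as follows. This is a minimal illustration, not the authors' code; the parameter names (`k`, `lam`, `mask`) and the Dirichlet choice of mixing coefficients are assumptions for exposition.

```python
import numpy as np

def instahide_encrypt(images, k=2, rng=None):
    """Hedged sketch of InstaHide-style encryption: mix k randomly chosen
    private images with random coefficients, then flip each pixel's sign
    at random so the mixture is visually unrecognizable."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(images), size=k, replace=False)  # pick k private images
    lam = rng.dirichlet(np.ones(k))                       # mixing weights, sum to 1
    mixed = sum(l * images[i] for l, i in zip(lam, idx))  # convex image mixture
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)      # random per-pixel sign flip
    return mask * mixed, idx

# Toy usage on random "images" with pixel values in [-1, 1]
imgs = np.random.default_rng(0).uniform(-1, 1, size=(10, 8, 8))
enc, sources = instahide_encrypt(imgs, k=2, rng=1)
```

The attack described in the paper exploits the fact that each private image typically contributes to several such encrypted outputs, which a comparative network can group before fusion and denoising.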

Published

2022-06-28

How to Cite

Luo, X., Xiao, X., Wu, Y., Liu, J., & Ooi, B. C. (2022). A Fusion-Denoising Attack on InstaHide with Data Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1899-1907. https://doi.org/10.1609/aaai.v36i2.20084

Section

AAAI Technical Track on Computer Vision II