How Does Data Augmentation Affect Privacy in Machine Learning?

Authors

  • Da Yu, Sun Yat-sen University
  • Huishuai Zhang, Microsoft Research
  • Wei Chen, Microsoft Research
  • Jian Yin, Sun Yat-sen University
  • Tie-Yan Liu, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v35i12.17284

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy; Privacy & Security

Abstract

It has been observed in the literature that data augmentation can significantly mitigate membership inference (MI) attacks, which are widely used to measure a model's information leakage about its training set. In this work, we challenge this observation by proposing new MI attacks that exploit the information of augmented data. We establish the optimal membership inference when the model is trained with augmented data, which inspires us to formulate the MI attack as a set classification problem, i.e., classifying a set of augmented instances instead of a single data point, and to design input-permutation-invariant features. Empirically, we demonstrate that the proposed approach consistently outperforms existing methods when the model is trained with data augmentation. Furthermore, we show that the proposed approach can achieve higher MI attack success rates on models trained with some data augmentation than existing methods achieve on models trained without data augmentation. Notably, we achieve a 70.1% MI attack success rate on CIFAR10 against a wide residual network, whereas the previous best approach attains only 61.9%. This suggests that the privacy risk of models trained with data augmentation could be largely underestimated.
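The set-classification idea can be made concrete as follows. Below is a minimal sketch, not the authors' released implementation: it assumes a trained PyTorch target model and a fixed list of augmentation callables, scores every augmented copy of a candidate example, and collapses the per-copy losses into order statistics so that the resulting feature vector is invariant to any permutation of the augmented set. All names here (attack_features, target_model, augmentations) are hypothetical.

import numpy as np
import torch
import torch.nn.functional as F

def attack_features(target_model, x, y, augmentations):
    # Map one candidate example to a permutation-invariant feature vector.
    #   target_model:  trained classifier under attack (torch.nn.Module)
    #   x:             input tensor of shape (C, H, W)
    #   y:             integer class label of x
    #   augmentations: list of callables, each producing an augmented copy of x
    losses = []
    with torch.no_grad():
        for aug in augmentations:
            logits = target_model(aug(x).unsqueeze(0))          # (1, num_classes)
            loss = F.cross_entropy(logits, torch.tensor([y]))   # loss on this copy
            losses.append(loss.item())
    # Sorting discards the ordering of the augmented instances, so any
    # permutation of the set produces exactly the same feature vector.
    losses = np.sort(np.array(losses))
    return np.concatenate([losses, [losses.mean(), losses.std()]])

A binary attack classifier (for instance, logistic regression) trained on such features, with membership labels obtained from shadow models, can then predict whether a candidate example was in the training set.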

Published

2021-05-18

How to Cite

Yu, D., Zhang, H., Chen, W., Yin, J., & Liu, T.-Y. (2021). How Does Data Augmentation Affect Privacy in Machine Learning? Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10746-10753. https://doi.org/10.1609/aaai.v35i12.17284

Issue

Vol. 35 No. 12 (2021)

Section

AAAI Technical Track on Machine Learning V