Privacy Leaks by Adversaries: Adversarial Iterations for Membership Inference Attack
DOI:
https://doi.org/10.1609/aaai.v40i42.40912
Abstract
Membership inference attack (MIA) has become one of the most widely used and effective methods for evaluating the privacy risks of machine learning models. The attack aims to determine whether a specific sample was part of a model's training set by analyzing the model's output. While traditional membership inference attacks rely on the model's posterior output, such as its confidence score on the target sample, we propose IMIA, a novel attack strategy that exploits the process of generating adversarial examples to infer membership. Specifically, we infer the membership of a target sample from the number of iterations required to generate its adversarial example. We conduct experiments across multiple models and datasets, and our results demonstrate that this iteration count is a reliable feature for membership inference, achieving strong performance in both black-box and white-box attack scenarios. This work offers a new perspective on evaluating model privacy and highlights the potential of adversarial example-based features for assessing privacy leakage.
Published
2026-03-14
How to Cite
Xue, J., Sun, Z., Ye, H., Luo, L., Chang, X., & Dai, G. (2026). Privacy Leaks by Adversaries: Adversarial Iterations for Membership Inference Attack. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35967–35975. https://doi.org/10.1609/aaai.v40i42.40912
Section
AAAI Technical Track on Philosophy and Ethics of AI