IPRemover: A Generative Model Inversion Attack against Deep Neural Network Fingerprinting and Watermarking
DOI:
https://doi.org/10.1609/aaai.v38i7.28619
Keywords:
CV: Adversarial Attacks & Robustness, CV: Computational Photography, Image & Video Synthesis, CV: Object Detection & Categorization
Abstract
Training Deep Neural Networks (DNNs) can be expensive when data is difficult to obtain or labeling it requires significant domain expertise. Hence, it is crucial that the Intellectual Property (IP) of DNNs trained on valuable data be protected against IP infringement. DNN fingerprinting and watermarking are two lines of work in DNN IP protection. Recently proposed DNN fingerprinting techniques can detect IP infringement while preserving model performance by relying on the key assumption that the decision boundaries of independently trained models are intrinsically different from one another. In contrast, DNN watermarking embeds a watermark in a model and verifies IP infringement if an identical or similar watermark is extracted from a suspect model. The techniques deployed in fingerprinting and watermarking differ significantly because their underlying mechanisms are different. From an adversary's perspective, a successful IP removal attack should defeat both fingerprinting and watermarking. However, to the best of our knowledge, there is no work on such attacks in the literature yet. In this paper, we fill this gap by presenting an IP removal attack that can defeat both fingerprinting and watermarking. We consider the challenging data-free scenario in which all data is inverted from the victim model, so that a stolen model depends only on the victim model. Experimental results demonstrate the success of our attack in defeating state-of-the-art DNN fingerprinting and watermarking techniques. This work reveals a novel attack surface that exploits generative model inversion attacks to bypass DNN IP defenses, a threat that must be addressed by future defenses for reliable IP protection.
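To illustrate the data-free setting described in the abstract, where every training sample for the stolen model is synthesized from the victim model itself, the sketch below shows a generic combination of generative model inversion and knowledge distillation in PyTorch. It is a minimal toy sketch, not the paper's IPRemover algorithm: the network architectures, loss choices (confidence maximization for the generator, KL-divergence distillation for the surrogate), and all names such as Generator, make_classifier, victim, and surrogate are illustrative assumptions.

# Toy sketch: data-free model extraction via model inversion + distillation.
# Assumptions only; this is NOT the IPRemover method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM, NUM_CLASSES, IMG = 100, 10, 32

class Generator(nn.Module):
    """Maps random noise to synthetic images (the 'inverted' data)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * IMG * IMG), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, IMG, IMG)

def make_classifier():
    # Tiny stand-in classifier; a real victim would be a trained CNN.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, 128),
                         nn.ReLU(), nn.Linear(128, NUM_CLASSES))

victim = make_classifier().eval()     # frozen victim model
for p in victim.parameters():
    p.requires_grad_(False)
surrogate = make_classifier()         # the "stolen" model being trained
gen = Generator()

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(64, NOISE_DIM)

    # 1) Model inversion: push the generator toward inputs that the
    #    victim classifies confidently (low prediction entropy).
    x = gen(z)
    probs = F.softmax(victim(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt_g.zero_grad()
    entropy.backward()
    opt_g.step()

    # 2) Distillation: train the surrogate to match the victim's outputs
    #    on the inverted samples only; no real training data is used.
    x = gen(z).detach()
    with torch.no_grad():
        teacher = F.softmax(victim(x), dim=1)
    student = F.log_softmax(surrogate(x), dim=1)
    loss_s = F.kl_div(student, teacher, reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

Because the surrogate is trained exclusively on generator outputs, the resulting model depends only on the victim model, which is the property the abstract refers to as the data-free scenario.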
Published
2024-03-24
How to Cite
Zong, W., Chow, Y.-W., Susilo, W., Baek, J., Kim, J., & Camtepe, S. (2024). IPRemover: A Generative Model Inversion Attack against Deep Neural Network Fingerprinting and Watermarking. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7837-7845. https://doi.org/10.1609/aaai.v38i7.28619
Issue
Vol. 38 No. 7 (2024)
Section
AAAI Technical Track on Computer Vision VI