Defending against Model Stealing via Verifying Embedded External Features
DOI:
https://doi.org/10.1609/aaai.v36i2.20036

Keywords:
Computer Vision (CV), Machine Learning (ML), Philosophy and Ethics of AI (PEAI)

Abstract
Obtaining a well-trained model involves expensive data collection and training procedures; the model is therefore valuable intellectual property. Recent studies revealed that adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structure. Currently, there are some defense methods that alleviate this threat, mostly by increasing the cost of model stealing. In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features. Specifically, we embed the external features by tampering with a few training samples via style transfer. We then train a meta-classifier to determine whether a suspicious model was stolen from the victim. This approach is inspired by the understanding that stolen models should contain the knowledge of features learned by the victim model. We examine our method on both the CIFAR-10 and ImageNet datasets. Experimental results demonstrate that our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process. The code for reproducing the main results is available on GitHub (https://github.com/zlh-thu/StealingVerification).
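The abstract describes a two-step pipeline: stylize a small fraction of the training set so the victim model learns defender-specified external features, then train a binary meta-classifier that decides, from a suspicious model's behavior on those stylized samples, whether it was stolen from the victim. Below is a minimal PyTorch sketch of that pipeline; the names embed_external_features, gradient_signature, and style_fn are illustrative placeholders invented for this sketch, not the authors' released API (see the linked repository for the actual implementation).

    # Minimal sketch of the defense described in the abstract (assumes PyTorch).
    import torch
    import torch.nn as nn

    def embed_external_features(images, labels, style_image, style_fn, ratio=0.1):
        # Step 1: stylize a small fraction of the training set so the victim
        # model picks up defender-specified external features. style_fn is any
        # image-to-image style-transfer routine (an assumption of this sketch).
        n = max(1, int(len(images) * ratio))
        idx = torch.randperm(len(images))[:n]
        stylized = images.clone()
        for i in idx:
            stylized[i] = style_fn(images[i], style_image)
        # Labels stay intact: this embeds features, not a backdoor trigger.
        return stylized, labels

    def gradient_signature(model, x, y):
        # Step 2 (feature extraction): the gradient of the loss w.r.t. a
        # stylized input serves as the signature the meta-classifier inspects.
        x = x.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        return x.grad.flatten()

    # Binary meta-classifier: stolen-from-victim vs. independently trained.
    # The input size assumes CIFAR-10-shaped images (3x32x32).
    meta_classifier = nn.Linear(3 * 32 * 32, 2)

At verification time, the defender would feed the suspicious model's signatures on the stylized samples to the trained meta-classifier. Using input gradients as the signature is an assumption of this sketch, chosen because a stolen model is expected to retain the embedded external features, while an independently trained model is not.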
Published
2022-06-28
How to Cite
Li, Y., Zhu, L., Jia, X., Jiang, Y., Xia, S.-T., & Cao, X. (2022). Defending against Model Stealing via Verifying Embedded External Features. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1464-1472. https://doi.org/10.1609/aaai.v36i2.20036
Section
AAAI Technical Track on Computer Vision II