Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks

Authors

  • Yaxin Xiao Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
  • Qingqing Ye Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
  • Zi Liang Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
  • Haoyang Li Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
  • RongHua Li Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
  • Huadi Zheng Huawei Technologies Co., Ltd.
  • Haibo Hu Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University; Research Centre for Privacy and Security Technologies in Future Smart Systems

DOI:

https://doi.org/10.1609/aaai.v40i42.40905

Abstract

Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries replicate their functionality through black-box queries. Model watermarking counters MEAs by embedding forensic markers for ownership verification. Current black-box watermarks prioritize MEA survival through representation entanglement, yet inadequately explore resilience against sequential MEAs and removal attacks. Our study reveals that this risk is underestimated because existing removal methods are weakened by entanglement. To address this gap, we propose Watermark Removal attacK (WRK), which circumvents entanglement constraints by exploiting decision boundaries shaped by prevailing sample-level watermark artifacts. WRK effectively reduces watermark success rates by ≥88.79% across existing watermarking benchmarks. For robust protection, we propose Class-Feature Watermarks (CFW), which improve resilience by leveraging class-level artifacts. CFW constructs a synthetic class using out-of-domain samples, eliminating vulnerable decision boundaries between original domain samples and their artifact-modified counterparts (watermark samples). CFW concurrently optimizes both MEA transferability and post-MEA stability. Experiments across multiple domains show that CFW consistently outperforms prior methods in resilience, maintaining a watermark success rate of ≥70.15% in extracted models even under the combined MEA and WRK distortion, while preserving the utility of protected models.

Published

2026-03-14

How to Cite

Xiao, Y., Ye, Q., Liang, Z., Li, H., Li, R., Zheng, H., & Hu, H. (2026). Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35903–35912. https://doi.org/10.1609/aaai.v40i42.40905

Section

AAAI Technical Track on Philosophy and Ethics of AI