Consensus Learning with Multi-Party Perturbation Triggers for Secure Model Access

Authors

  • Yizhun Zhang, Southeast University
  • Jie Huang, Southeast University; Purple Mountain Laboratories
  • Zeping Zhang, Southeast University
  • Shuaishuai Zhang, Southeast University
  • Changhao Ding, Southeast University
  • Xuan Chen, Southeast University

DOI:

https://doi.org/10.1609/aaai.v40i42.40925

Abstract

With the widespread deployment of deep learning models in multi-party collaborative scenarios, secure model access control and intellectual property (IP) protection have become increasingly critical. To address the lack of proactive defense mechanisms in existing methods for such settings, this paper introduces a novel paradigm, Consensus Learning, which enables fine-grained control over model execution permissions via a multi-party joint authorization mechanism. Building on this, we propose the Collaborative Perturbation Trigger Method (CPTM), which allows participating parties to collaboratively generate perturbation-based trigger data that embed identity features. The model can be activated only with the collectively constructed trigger, enforcing tightly bound access control without modifying the model architecture. Extensive experiments on the CIFAR-10, CIFAR-100, MNIST, and Face-LFW datasets demonstrate that the proposed method maintains prediction accuracy within 2% of baseline unprotected models on authorized data, whereas under unauthorized or adversarial inputs model accuracy drops below 10%, showcasing strong access control and robustness. This study offers a new direction for building scalable, robust, and proactively protected deep learning models in multi-party collaborative environments.
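The abstract's core mechanism, a trigger that must be jointly constructed by all parties before the model will run, can be illustrated with a toy sketch. Everything below is a hypothetical construction for intuition only (share generation, the correlation-based gate, and all parameter values are assumptions, not the paper's CPTM): each party holds a secret perturbation share, the joint trigger is the sum of all shares, and a gate activates the model only when an input carries that joint perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 64        # toy input dimensionality (assumed)
N_PARTIES = 3   # number of collaborating parties (assumed)
EPS = 6.0       # perturbation strength (assumed)

# Each party independently holds a secret perturbation share.
shares = [rng.normal(0.0, 1.0, DIM) for _ in range(N_PARTIES)]

# The joint trigger direction exists only when ALL shares are combined.
trigger = sum(shares)
trigger /= np.linalg.norm(trigger)

def embed_trigger(x, t=trigger, eps=EPS):
    """Authorized use: the parties jointly add the scaled trigger to the input."""
    return x + eps * t

def gate(x, t=trigger, eps=EPS):
    """Activate the model only if the joint trigger is present in the input."""
    score = float(x @ t)  # projection onto the secret trigger direction
    return score > eps / 2.0

def model(x):
    # Stand-in for a real network: runs only on gated (authorized) inputs.
    return "prediction" if gate(x) else "access denied"

x = rng.normal(0.0, 1.0, DIM)   # raw input, no trigger embedded
x_auth = embed_trigger(x)       # input authorized by all parties jointly
```

A single party's share (or any strict subset) does not reconstruct the trigger direction, so the gate fails for partial coalitions as well as for plain inputs; this is the sense in which authorization is "collective" in the toy model.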

Published

2026-03-14

How to Cite

Zhang, Y., Huang, J., Zhang, Z., Zhang, S., Ding, C., & Chen, X. (2026). Consensus Learning with Multi-Party Perturbation Triggers for Secure Model Access. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 36084–36091. https://doi.org/10.1609/aaai.v40i42.40925

Section

AAAI Technical Track on Philosophy and Ethics of AI