LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators

Authors

  • You Li, Northwestern University
  • Guannan Zhao, Northwestern University
  • Yuhao Ju, Northwestern University
  • Yunqi He, Northwestern University
  • Jie Gu, Northwestern University
  • Hai Zhou, Northwestern University

DOI

https://doi.org/10.1609/aaai.v40i28.39496

Abstract

We introduce LLA, an effective intellectual property (IP) protection scheme for generative AI models. LLA leverages the synergy between hardware and software to defend against various supply chain threats, including model theft, model corruption, and information leakage. On the software side, it embeds key bits into neurons that can trigger outliers to degrade performance and applies invariance transformations to obscure the key values. On the hardware side, it integrates a lightweight locking module into the AI accelerator while maintaining compatibility with various dataflow patterns and toolchains. An accelerator with a pre-stored secret key acts as a license to access the model services provided by the IP owner. The evaluation results show that LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.
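To make the key-embedding idea concrete, here is a minimal sketch of how key bits might be tied to neuron weights so that a wrong key triggers outlier values and degrades performance, while the correct key restores the original model exactly. The transform (sign-flip plus scaling on selected rows) and all names are illustrative assumptions for this abstract, not the paper's actual construction.

```python
import numpy as np

SCALE = 50.0  # magnitude of the outlier perturbation (assumed value)

def lock_weights(W, key_bits):
    """Embed one key bit per selected neuron (row) of W.
    A bit of 1 flips the sign and scales the row, planting an outlier."""
    W_locked = W.copy()
    for i, b in enumerate(key_bits):
        if b:
            W_locked[i] *= -SCALE
    return W_locked

def unlock_weights(W_locked, key_bits):
    """Invert the locking transform; only the correct key restores W."""
    W = W_locked.copy()
    for i, b in enumerate(key_bits):
        if b:
            W[i] /= -SCALE
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
key = np.array([1, 0, 1, 1, 0, 0, 1, 0])

W_locked = lock_weights(W, key)
assert np.allclose(unlock_weights(W_locked, key), W)   # correct key: exact recovery

wrong = 1 - key
W_bad = unlock_weights(W_locked, wrong)
# With a wrong key, the perturbed rows stay as large outliers,
# so the unlocked model's activations are badly distorted.
assert np.abs(W_bad).max() > np.abs(W).max()
```

In the scheme described above, this unlocking step would live in a lightweight hardware module inside the accelerator, keyed by a pre-stored secret, so the weights shipped in the supply chain are only useful on licensed devices.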

Published

2026-03-14

How to Cite

Li, Y., Zhao, G., Ju, Y., He, Y., Gu, J., & Zhou, H. (2026). LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23274–23282. https://doi.org/10.1609/aaai.v40i28.39496

Section

AAAI Technical Track on Machine Learning V