SentinelLMs: Encrypted Input Adaptation and Fine-Tuning of Language Models for Private and Secure Inference
DOI:
https://doi.org/10.1609/aaai.v38i19.30136
Keywords:
General
Abstract
This paper addresses the privacy and security concerns associated with deep neural language models, which serve as crucial components in various modern AI-based applications. These models are typically pre-trained, fine-tuned for specific tasks, and then deployed on servers accessed over the internet. However, this introduces two fundamental risks: (a) the transmission of user inputs to the server via the network gives rise to interception vulnerabilities, and (b) privacy concerns emerge as organizations that deploy such models store user data with restricted context. To address this, we propose a novel method to adapt and fine-tune transformer-based language models on passkey-encrypted user-specific text. The original pre-trained language model first undergoes a quick adaptation (without any further pre-training) with a series of irreversible transformations applied to the tokenizer and token embeddings. This enables the model to perform inference on encrypted inputs while preventing reverse engineering of text from model parameters and intermediate outputs. After adaptation, models are fine-tuned on encrypted versions of existing training datasets. Experimental evaluation employing adapted versions of renowned models (e.g., BERT, RoBERTa) across established benchmark English and multilingual datasets for text classification and sequence labeling shows that encrypted models achieve performance parity with their original counterparts. This serves to safeguard performance, privacy, and security cohesively.
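To give a concrete intuition for the kind of passkey-keyed adaptation the abstract describes, below is a minimal, hypothetical sketch in Python. It assumes the transformation is a secret, passkey-derived permutation of the tokenizer vocabulary, with the embedding rows reordered to match so the adapted model consumes the remapped token ids directly. This is an illustrative simplification, not the paper's actual (irreversible) construction; all function names, the passkey, and the toy dimensions are invented for the example.

```python
# Illustrative sketch (not the paper's exact method): derive a secret
# permutation of the vocabulary from a user passkey, remap token ids on the
# client side, and reorder the model's embedding rows to match.
import hashlib
import numpy as np

def passkey_permutation(passkey: str, vocab_size: int) -> np.ndarray:
    """Deterministically derive a vocabulary permutation from a passkey."""
    seed = int.from_bytes(hashlib.sha256(passkey.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.permutation(vocab_size)

def encrypt_token_ids(token_ids, perm):
    """Map plaintext token ids to their permuted ('encrypted') ids."""
    return [int(perm[t]) for t in token_ids]

def adapt_embeddings(embedding_matrix: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Reorder embedding rows so the adapted model can look up encrypted ids."""
    adapted = np.empty_like(embedding_matrix)
    adapted[perm] = embedding_matrix  # row for plaintext id t now lives at index perm[t]
    return adapted

# Toy usage with a hypothetical 10-token vocabulary and 4-dimensional embeddings.
perm = passkey_permutation("user-secret-passkey", vocab_size=10)
emb = np.random.randn(10, 4)
adapted_emb = adapt_embeddings(emb, perm)
plain_ids = [2, 5, 7]
enc_ids = encrypt_token_ids(plain_ids, perm)
# The adapted model only ever sees enc_ids, yet retrieves the same vectors:
assert np.allclose(adapted_emb[enc_ids], emb[plain_ids])
```

The point of the sketch is that the server-side model and the data it stores never reference plaintext token ids; without the passkey-derived mapping, the ids and intermediate outputs it handles do not directly reveal the original text. The paper's transformations are stated to be irreversible, which a bare permutation is not, so this should be read only as a schematic of the adaptation workflow.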
Published
2024-03-24
How to Cite
Mishra, A., Li, M., & Deo, S. (2024). SentinelLMs: Encrypted Input Adaptation and Fine-Tuning of Language Models for Private and Secure Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21403-21411. https://doi.org/10.1609/aaai.v38i19.30136
Issue
Section
AAAI Technical Track on Safe, Robust and Responsible AI Track