MERGE: Fast Private Text Generation
DOI:
https://doi.org/10.1609/aaai.v38i18.29964
Keywords:
PEAI: Privacy & Security, NLP: (Large) Language Models, NLP: Generation
Abstract
The drastic increase in language models' parameters has led to a new trend of deploying models on cloud servers, raising growing concerns about private inference for Transformer-based models. Existing two-party privacy-preserving techniques, however, consider only natural language understanding (NLU) scenarios. Private inference in natural language generation (NLG), crucial for applications like translation and code completion, remains underexplored. In addition, previous privacy-preserving techniques suffer from convergence issues during model training and exhibit poor inference speed on NLG models because they neglect the time-consuming operations in auto-regressive generation. To address these issues, we propose MERGE, a fast private text generation framework for Transformer-based language models. MERGE reuses the output hidden state as the word embedding to bypass the embedding computation, and reorganizes the linear operations in the Transformer module to accelerate the forward procedure. Extensive experiments show that MERGE achieves a 26.5x speedup over the vanilla encrypted model at a sequence length of 512 and reduces communication cost by 80%, with up to a 10x speedup over state-of-the-art approximated models.
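To make the embedding-reuse idea in the abstract concrete, the sketch below contrasts a standard auto-regressive loop (hidden state -> logits -> argmax -> embedding lookup each step) with a loop that feeds the hidden state back directly as the next input embedding. It is a minimal toy illustration under assumed names (toy_decoder_step, W_emb, etc.), not the authors' actual implementation or their secure-computation protocol.

```python
# Toy illustration of "embedding resend": reuse the previous hidden state as
# the next input embedding, deferring argmax/embedding lookup (both expensive
# under encryption) until after generation. All names here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size, steps = 16, 100, 5

W_dec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1   # toy decoder weights
W_emb = rng.standard_normal((vocab_size, hidden_dim)) * 0.1   # embedding table
W_out = W_emb                                                 # tied output head

def toy_decoder_step(x):
    """One simplified decoder step: input embedding -> output hidden state."""
    return np.tanh(x @ W_dec)

def standard_decode(h0):
    """Vanilla loop: hidden -> logits -> token -> embedding lookup each step."""
    x, tokens = h0, []
    for _ in range(steps):
        h = toy_decoder_step(x)
        token = int(np.argmax(h @ W_out.T))   # per-step argmax
        tokens.append(token)
        x = W_emb[token]                      # per-step embedding lookup
    return tokens

def merge_style_decode(h0):
    """Embedding-resend loop: the hidden state itself is the next input."""
    x, hiddens = h0, []
    for _ in range(steps):
        h = toy_decoder_step(x)
        hiddens.append(h)
        x = h                                 # reuse hidden state directly
    # Tokens can be recovered once, after the generation loop finishes.
    return [int(np.argmax(h @ W_out.T)) for h in hiddens]

h0 = rng.standard_normal(hidden_dim) * 0.1
print(standard_decode(h0))
print(merge_style_decode(h0))
```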
Published
2024-03-24
How to Cite
Liang, Z., Wang, P., Zhang, R., Xu, N., Zhang, S., Xing, L., Bai, H., & Zhou, Z. (2024). MERGE: Fast Private Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19884-19892. https://doi.org/10.1609/aaai.v38i18.29964
Issue
Vol. 38 No. 18 (2024)
Section
AAAI Technical Track on Philosophy and Ethics of AI