PCFormer: Accelerating Privacy-preserving Transformer Inference by Partition and Combination

Authors

  • Bo Zeng, Wuhan University
  • Zhi Pang, Wuhan University
  • Yuyang Zhang, Wuhan University
  • Kai Zhao, Wuhan University
  • Tian Wu, Nanchang University
  • Geying Yang, Tianjin University
  • Lina Wang, Wuhan University
  • Run Wang, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v40i33.40033

Abstract

In recent years, transformer-based models have achieved remarkable success in sensitive domains, including healthcare, finance, and personalized services, but their deployment raises significant privacy concerns. Existing secure inference studies have introduced cryptographic techniques such as Homomorphic Encryption (HE) and Secure Multi-Party Computation (MPC). However, these approaches either target isolated model components or incur prohibitive computational and communication overheads, failing to support latency-sensitive or resource-limited environments. In our investigation, we identify substantial redundancy in the nonlinear operations of deep models and in their alternation with linear layers. Motivated by this observation, we propose PCFormer, a universal optimization methodology tailored to the sequences of linear and nonlinear computations in the Transformer. PCFormer introduces structure-aware partition and combination techniques specially designed for Multi-Head Attention (MHA) and Feed-Forward Network (FFN). Specifically, we reveal distinct sources of redundancy in the Softmax and GeLU functions during inference and implement partitions at the token and channel levels, respectively. These reduced computations are then combined with the preceding and succeeding linear operations, enhancing both computational and communication efficiency. Experimental results on the GLUE benchmarks demonstrate that PCFormer achieves a 1.9× speedup in both computation and communication without compromising accuracy, compared to existing privacy-preserving Transformer frameworks. Furthermore, we demonstrate that PCFormer generalizes effectively to other deep learning architectures involving structured linear-nonlinear compositions under cryptographic constraints.
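To make the partition-and-combination idea concrete, the minimal sketch below illustrates it in plaintext NumPy, with no cryptography: the Softmax is evaluated only on a token-level partition of the attention scores, and the selected partition is folded back into the adjacent linear operation on the value matrix. The top-k selection heuristic, the `partitioned_attention` function, and all parameters are illustrative assumptions for exposition only, not the authors' protocol, which operates under HE/MPC and uses its own redundancy criteria.

```python
# Illustrative sketch (plaintext NumPy, no cryptography) of a generic
# "partition and combine" pattern: restrict the expensive nonlinear
# (Softmax) to a token-level partition, then merge the reduced result
# into the succeeding linear operation. All names and heuristics here
# are assumptions for exposition, not the PCFormer implementation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def partitioned_attention(Q, K, V, keep_ratio=0.5):
    """Attention with Softmax computed only on a subset of key tokens."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # preceding linear part
    # Partition step: keep only the top-scoring key tokens per query,
    # so the nonlinear Softmax (costly under MPC/HE) sees fewer elements.
    k = max(1, int(keep_ratio * scores.shape[-1]))
    idx = np.argsort(scores, axis=-1)[:, -k:]        # token-level partition
    reduced = np.take_along_axis(scores, idx, axis=-1)
    probs = softmax(reduced, axis=-1)                # nonlinear on the partition only
    # Combine step: fold the partition into the succeeding linear operation
    # by gathering only the matching rows of V for each query.
    out = np.einsum('qk,qkd->qd', probs, V[idx])     # V[idx] has shape (q, k, d)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
print(partitioned_attention(Q, K, V).shape)          # (8, 16)
```

The same pattern applies in spirit to the FFN: a channel-level partition of the GeLU input can be absorbed into the surrounding projection matrices, so that fewer nonlinear evaluations and less ciphertext communication are needed per layer.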

Published

2026-03-14

How to Cite

Zeng, B., Pang, Z., Zhang, Y., Zhao, K., Wu, T., Yang, G., … Wang, R. (2026). PCFormer: Accelerating Privacy-preserving Transformer Inference by Partition and Combination. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 28076–28084. https://doi.org/10.1609/aaai.v40i33.40033

Section

AAAI Technical Track on Machine Learning X