SecMoE: Communication-Efficient Secure MoE Inference via Select-Then-Compute

Authors

  • Bowen Shen School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen, China Department of New Networks, Pengcheng Laboratory, Shenzhen, China
  • Yuyue Chen School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen, China
  • Peng Yang School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen, China
  • Bin Zhang Department of New Networks, Pengcheng Laboratory, Shenzhen, China
  • Xi Zhang College of Management and Economics, Tianjin University, Tianjin, China
  • Zoe L. Jiang School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen, China Guangdong Key Laboratory of New Security and Intelligence Technology, Shenzhen, China

DOI:

https://doi.org/10.1609/aaai.v40i30.39721

Abstract

Privacy-preserving Transformer inference has gained attention due to the potential leakage of private information. Despite recent progress, existing frameworks still fall short of practical model scales, with gaps of up to a hundredfold. One way to close this gap is the Mixture of Experts (MoE) architecture, which has emerged as a promising technique for scaling up model capacity with minimal overhead. However, because current secure two-party computation (2-PC) protocols let the server homomorphically evaluate the FFN layer with its plaintext model weights, in the MoE setting this could reveal to the server which expert is activated, exposing token-level privacy about the client's input. While naively evaluating all experts before selection would protect privacy, it nullifies MoE sparsity and incurs exactly the heavy computational overhead that sparse MoE is designed to avoid. To address these privacy and efficiency limitations, we propose SecMoE, a 2-PC privacy-preserving inference framework. Unifying the per-entry circuits of both the MoE layer and piecewise polynomial functions, SecMoE obliviously selects the extracted parameters from these circuits and computes only one encrypted entry, a strategy we refer to as Select-Then-Compute. This allows the privately inferred model to scale to 63× larger while incurring only a 15.2× increase in end-to-end runtime. Extensive experiments show that, under five-expert settings, SecMoE lowers end-to-end private inference communication by 1.8–7.1× and achieves a 1.3–3.8× speedup over state-of-the-art (SOTA) protocols.
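To illustrate the core idea, the following is a minimal plaintext sketch (not the cryptographic protocol): instead of evaluating every expert and then selecting one output ("compute-then-select"), one can first combine the expert parameters under a one-hot routing mask and then perform a single evaluation ("select-then-compute"). All names and shapes here are hypothetical; in the actual 2-PC setting, the selection would be carried out obliviously over secret-shared or encrypted values so the server never learns which expert is active, and the costly homomorphic FFN evaluation happens only once.

```python
import numpy as np

rng = np.random.default_rng(0)
num_experts, d = 5, 4
# Hypothetical per-expert FFN weight matrices and one token activation.
experts = [rng.standard_normal((d, d)) for _ in range(num_experts)]
x = rng.standard_normal(d)
# Router's one-hot choice (here: expert 2); in the protocol this mask
# would be hidden from the server.
onehot = np.zeros(num_experts)
onehot[2] = 1.0

# Compute-then-select: evaluate ALL experts, then pick one output.
# Cost scales with the number of experts (E expensive evaluations).
out_all = np.stack([W @ x for W in experts])
y_compute_then_select = onehot @ out_all

# Select-then-compute: combine parameters under the mask, then evaluate once.
# The masked sum is linear (cheap over shares/ciphertexts); the expensive
# evaluation is performed a single time.
W_sel = sum(m * W for m, W in zip(onehot, experts))
y_select_then_compute = W_sel @ x

assert np.allclose(y_compute_then_select, y_select_then_compute)
```

Both orderings produce the same result because selection with a one-hot mask commutes with the linear evaluation; the saving in the secure setting comes from replacing many expensive encrypted evaluations with one, at the price of a cheap linear combination of parameters.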

Published

2026-03-14

How to Cite

Shen, B., Chen, Y., Yang, P., Zhang, B., Zhang, X., & Jiang, Z. L. (2026). SecMoE: Communication-Efficient Secure MoE Inference via Select-Then-Compute. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25286–25294. https://doi.org/10.1609/aaai.v40i30.39721

Section

AAAI Technical Track on Machine Learning VII