Scalable Privacy-Preserving Neural Network Training over Z2k via RMFE-Based Packing and Mixed-Circuit Computation

Authors

  • Hengcheng Zhou, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v40i34.40132

Abstract

We introduce a novel framework for privacy-preserving multi-party neural network training over ℤ_(2^k) with semi-honest security in the honest-majority setting. Our work utilizes the Shamir secret-sharing scheme over Galois rings GR(2^k, d) and is scalable in the number of participants. Our primary contribution is a generalization of existing data-packing techniques used in private training through Reverse Multiplication-Friendly Embedding (RMFE), which enables a higher packing density and thus more efficient SIMD-style parallel computation. Notably, our work is the first to support a general form of RMFE, lifting a common restriction of previous approaches. To holistically optimize the training process, we further integrate mixed-circuit techniques that are fully compatible with our RMFE-based packing scheme. This enables our protocol to efficiently compute nonlinear functions, such as comparison, by leveraging bit-wise computations over GR(2, d). We consolidate these advances into an end-to-end parallel training framework. Experimental results on both fully connected and convolutional neural networks validate the practical performance advantages of our framework compared to existing methods.
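To make the underlying primitive concrete, the following toy sketch illustrates Shamir secret sharing over GF(2^8), which is the k = 1 case GR(2, 8) of the Galois rings the abstract refers to. This is an illustrative assumption, not the paper's implementation: the field modulus, threshold, and evaluation points are chosen arbitrarily, and the paper's actual protocol works over GR(2^k, d) with RMFE packing on top.

```python
# Toy sketch (not the paper's code): Shamir sharing over GF(2^8) = GR(2, 8).
import random

IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced modulo IRRED."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= IRRED
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    # In GF(2^8), a^(2^8 - 2) = a^254 is the multiplicative inverse.
    return gf_pow(a, 254)

def share(secret, t, xs, rng):
    """Degree-t sharing: sample f with f(0) = secret, hand out f(x) for x in xs."""
    coeffs = [secret] + [rng.randrange(256) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation over GF(2^8)
            y = gf_mul(y, x) ^ c
        return y
    return [(x, f(x)) for x in xs]

def reconstruct(shares):
    """Lagrange interpolation at 0 (addition in characteristic 2 is XOR)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = gf_mul(num, xj)       # (0 - xj) equals xj in char 2
                den = gf_mul(den, xi ^ xj)  # (xi - xj) equals xi XOR xj
        secret ^= gf_mul(yi, gf_mul(num, gf_inv(den)))
    return secret

rng = random.Random(0)
shares = share(0x42, t=2, xs=[1, 2, 3, 4, 5], rng=rng)
assert reconstruct(shares[:3]) == 0x42  # any t + 1 = 3 shares recover the secret
```

Over ℤ_(2^k) alone, Shamir sharing breaks down because too few point differences are invertible; extending to the Galois ring GR(2^k, d), as the paper does, supplies 2^d pairwise-invertible evaluation points while keeping arithmetic modulo 2^k.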

Published

2026-03-14

How to Cite

Zhou, H. (2026). Scalable Privacy-Preserving Neural Network Training over Z2k via RMFE-Based Packing and Mixed-Circuit Computation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 28964–28972. https://doi.org/10.1609/aaai.v40i34.40132

Section

AAAI Technical Track on Machine Learning XI