Goten: GPU-Outsourcing Trusted Execution of Neural Network Training

Authors

  • Lucien K. L. Ng, The Chinese University of Hong Kong
  • Sherman S. M. Chow, The Chinese University of Hong Kong
  • Anna P. Y. Woo, The Chinese University of Hong Kong
  • Donald P. H. Wong, The Chinese University of Hong Kong
  • Yongjun Zhao, Nanyang Technological University

DOI

https://doi.org/10.1609/aaai.v35i17.17746

Keywords

Security and Privacy

Abstract

Deep learning unlocks applications with societal impact, e.g., detecting child exploitation imagery and genomic analysis of rare diseases. Deployment, however, must comply with stringent privacy regulations, so training algorithms that preserve the privacy of training data are in pressing demand. Purely cryptographic approaches can protect privacy, but they remain costly, even when they rely on two or more non-colluding servers. Seemingly "trivial" operations in plaintext quickly become prohibitively inefficient once a series of them is "crypto-processed," e.g., (dynamic) quantization for ensuring that intermediate values do not overflow. Slalom, recently proposed by Tramer and Boneh, is the first solution to leverage both a GPU (for efficient batch computation) and a trusted execution environment (TEE) (for minimizing the use of cryptography). Roughly, it relies on heavy pre-computation over known, fixed weights, and hence supports only private inference, leaving five related problems for private training unaddressed. Goten, our privacy-preserving training and prediction framework, tackles all five simultaneously via a careful design over the "mismatched" cryptographic and GPU data types (due to the tension between precision and efficiency) and a round-optimal GPU-outsourcing protocol (minimizing the communication cost between servers). It 1) stochastically trains a low-bitwidth yet accurate model, 2) supports dynamic quantization (a challenge left open by Slalom), 3) minimizes the memory-swapping overhead of the memory-limited TEE and its communication with the GPU, 4) crypto-protects the (dynamic) model weights from the untrusted GPU, and 5) outperforms a pure-TEE system, even without the pre-computation needed by Slalom. As a baseline, we build CaffeScone, which secures Caffe using a TEE but not a GPU; Goten shows a 6.84x speed-up over it on the whole VGG-11. Goten also outperforms Falcon, the latest secure multi-server cryptographic solution proposed by Wagh et al., by 132.64x on VGG-11. Lastly, we demonstrate Goten's efficacy in training models for breast cancer diagnosis over sensitive images.
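
To make the quantization challenge concrete, here is a minimal sketch of dynamic quantization with stochastic rounding (in NumPy). It is an illustration only, not Goten's actual code; the 8-bit width and the max-based scaling policy are assumptions for exposition.

    import numpy as np

    def stochastic_quantize(x: np.ndarray, bits: int = 8):
        """Rescale x into a signed `bits`-bit range with unbiased stochastic rounding."""
        qmax = 2 ** (bits - 1) - 1               # e.g., 127 for 8 bits
        scale = float(np.max(np.abs(x))) / qmax  # "dynamic": depends on live values
        if scale == 0.0:
            scale = 1.0                          # all-zero input: avoid division by zero
        scaled = x / scale
        lo = np.floor(scaled)
        # Round up with probability equal to the fractional part, so E[q] = x / scale.
        q = lo + (np.random.random(x.shape) < (scaled - lo))
        return q.astype(np.int32), scale

    q, s = stochastic_quantize(np.random.randn(4, 4).astype(np.float32))
    x_hat = q * s  # low-bitwidth, unbiased approximation of the original tensor

The abstract also notes that Goten crypto-protects the dynamic model weights from the untrusted GPU. A standard primitive for this is additive secret sharing over a ring, sketched below; each share alone is uniformly random, so a single GPU holding one share learns nothing about the weights. The ring size and two-share layout are illustrative assumptions, not necessarily Goten's exact outsourcing protocol.

    import numpy as np

    RING = 2 ** 32  # shares live in Z_{2^32}; uint64 arithmetic never overflows here

    def share(x: np.ndarray):
        """Split an integer tensor into two additive shares, each uniformly random."""
        r = np.random.randint(0, RING, size=x.shape, dtype=np.uint64)
        return r, (x.astype(np.uint64) - r) % RING

    def reconstruct(s0: np.ndarray, s1: np.ndarray) -> np.ndarray:
        return (s0 + s1) % RING  # recovers x mod RING

    w = np.arange(6, dtype=np.uint64).reshape(2, 3)
    w0, w1 = share(w)
    assert np.array_equal(reconstruct(w0, w1), w)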

Published

2021-05-18

How to Cite

Ng, L. K. L., Chow, S. S. M., Woo, A. P. Y., Wong, D. P. H., & Zhao, Y. (2021). Goten: GPU-Outsourcing Trusted Execution of Neural Network Training. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 14876-14883. https://doi.org/10.1609/aaai.v35i17.17746

Issue

Vol. 35 No. 17 (2021)

Section

AAAI Special Track on AI for Social Impact