Differentially Private Learning with Per-Sample Adaptive Clipping

Authors

  • Tianyu Xia School of Software & Microelectronics, Peking University
  • Shuheng Shen Tiansuan Lab, Ant Group
  • Su Yao Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University
  • Xinyi Fu Tiansuan Lab, Ant Group
  • Ke Xu Department of Computer Science & Technology, Tsinghua University; Zhongguancun Laboratory, Beijing
  • Xiaolong Xu Tiansuan Lab, Ant Group
  • Xing Fu Tiansuan Lab, Ant Group

DOI:

https://doi.org/10.1609/aaai.v37i9.26242

Keywords:

ML: Privacy-Aware ML, CV: Bias, Fairness & Privacy, ML: Optimization, PEAI: Privacy and Security

Abstract

Privacy in AI has drawn growing attention from both researchers and the general public in recent years. As one route to privacy-preserving AI, differentially private learning is a framework that enables AI models to be trained with differential privacy (DP). To achieve DP during learning, existing algorithms typically bound the magnitude of per-sample gradients with a constant clipping threshold, which requires careful tuning because of its significant impact on model performance. To address this issue, recent works NSGD and Auto-S innovatively propose replacing clipping with normalization to avoid hyperparameter tuning. However, normalization-based approaches like NSGD and Auto-S rely on a monotonic weight function, which places excessive weight on samples with small gradients and introduces extra deviation into the update. In this paper, we propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function, which guarantees privacy without the hyperparameter tuning typical of constant clipping, while significantly reducing the deviation between the update and the true batch-averaged gradient. We provide a rigorous theoretical convergence analysis and show that, at the same order of convergence rate, the proposed algorithm achieves a lower non-vanishing bound, which is maintained over training iterations, than NSGD/Auto-S. In addition, through extensive experimental evaluation, we show that DP-PSAC outperforms or matches state-of-the-art methods on multiple mainstream vision and language tasks.
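To illustrate the contrast the abstract draws, the sketch below compares three per-sample weighting schemes: constant clipping (standard DP-SGD), a monotone normalization weight of the Auto-S/NSGD form 1/(||g|| + r), and a non-monotonic adaptive weight in the spirit of DP-PSAC. The specific non-monotonic formula here is illustrative rather than the paper's exact definition; the point is that it stops assigning very large weight to samples whose gradient norm is near zero.

```python
import numpy as np

def constant_clip_weight(norm, C=1.0):
    # Standard DP-SGD: scale each per-sample gradient by min(1, C/||g||),
    # where the threshold C must be tuned per task.
    return min(1.0, C / norm)

def normalization_weight(norm, r=0.01):
    # Auto-S/NSGD-style monotone weight 1/(||g|| + r): as ||g|| -> 0 the
    # weight approaches 1/r, so near-zero gradients dominate the update.
    return 1.0 / (norm + r)

def adaptive_weight(norm, r=0.01):
    # Illustrative non-monotonic weight (an assumption, not the paper's
    # exact formula): the effective regularizer r/(||g|| + r) stays large
    # for small norms, so small-gradient samples are no longer over-weighted.
    return 1.0 / (norm + r / (norm + r))

# Compare the schemes across a range of per-sample gradient norms.
for n in [1e-3, 1e-2, 0.1, 1.0, 10.0]:
    print(f"||g||={n:6.3f}  clip={constant_clip_weight(n):8.3f}  "
          f"norm={normalization_weight(n):8.3f}  adapt={adaptive_weight(n):8.3f}")
```

At a norm of 0.001 the monotone weight is roughly 91, while the non-monotonic weight stays near 1; the adaptive weight first rises and then falls as the norm grows, which is what "non-monotonic" refers to in the abstract.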

Published

2023-06-26

How to Cite

Xia, T., Shen, S., Yao, S., Fu, X., Xu, K., Xu, X., & Fu, X. (2023). Differentially Private Learning with Per-Sample Adaptive Clipping. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10444-10452. https://doi.org/10.1609/aaai.v37i9.26242

Section

AAAI Technical Track on Machine Learning IV