An Improved Privacy and Utility Analysis of Differentially Private SGD with Bounded Domain and Smooth Losses

Authors

  • Hao Liang, The Hong Kong University of Science and Technology (Guangzhou)
  • Wanrong Zhang, Harvard University
  • Xinlei He, The Hong Kong University of Science and Technology (Guangzhou)
  • Kaishun Wu, The Hong Kong University of Science and Technology (Guangzhou)
  • Hong Xing, The Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v40i28.39510

Abstract

Differentially Private Stochastic Gradient Descent (DPSGD) is widely used to protect sensitive data during the training of machine learning models, but its privacy guarantee often comes at a large cost in model performance due to the lack of tight theoretical bounds quantifying the privacy loss. While recent efforts have achieved more accurate privacy guarantees, they still rely on assumptions that are prohibitive in practical applications, such as convexity and complex parameter requirements, and rarely investigate in depth the impact of privacy mechanisms on the model's utility. In this paper, we provide a rigorous privacy characterization for DPSGD with general L-smooth and non-convex loss functions, revealing that the privacy loss converges over iterations in bounded-domain cases. Specifically, we track the privacy loss over multiple iterations, leveraging the noisy smooth-reduction property, and further establish a comprehensive convergence analysis in different scenarios. In particular, we show that for DPSGD with a bounded domain, (i) the privacy loss can still converge without the convexity assumption, (ii) a smaller bounded diameter can improve both privacy and utility simultaneously under certain conditions, and (iii) we derive the attainable big-O order of the privacy-utility trade-off for DPSGD with gradient clipping (DPSGD-GC) and for DPSGD-GC with a bounded domain (DPSGD-DC), respectively, under a strongly convex population risk function. Experiments via membership inference attacks (MIAs) in a practical setting validate the insights gained from the theoretical results.
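The abstract contrasts DPSGD with gradient clipping (DPSGD-GC) and its bounded-domain variant (DPSGD-DC), which additionally projects the iterate onto a domain of bounded diameter. The following is a minimal sketch of a single update of each; the function name, the noise calibration, and the choice of an L2 ball as the bounded domain are illustrative assumptions, not the paper's exact notation or algorithm.

```python
import numpy as np

def dpsgd_step(w, per_example_grads, clip_C, sigma, lr, domain_D=None, rng=None):
    """One DPSGD update: per-example L2 clipping (DPSGD-GC), Gaussian noise,
    and, if domain_D is given, projection onto an L2 ball of diameter domain_D
    centered at the origin (an illustrative bounded-domain variant, DPSGD-DC)."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip each per-example gradient to L2 norm at most clip_C.
    clipped = [g * min(1.0, clip_C / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Average the clipped gradients and add calibrated Gaussian noise.
    n = len(clipped)
    noisy_grad = np.mean(clipped, axis=0) + rng.normal(
        0.0, sigma * clip_C / n, size=np.shape(w))
    w_new = w - lr * noisy_grad
    # Optional projection onto the ball of radius domain_D / 2.
    if domain_D is not None:
        norm = np.linalg.norm(w_new)
        if norm > domain_D / 2:
            w_new = w_new * (domain_D / 2) / norm
    return w_new
```

The projection step is what makes claims (i) and (ii) possible: a smaller `domain_D` shrinks the reachable iterate set, which under the paper's conditions tightens both the privacy loss and the utility bound simultaneously.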

Published

2026-03-14

How to Cite

Liang, H., Zhang, W., He, X., Wu, K., & Xing, H. (2026). An Improved Privacy and Utility Analysis of Differentially Private SGD with Bounded Domain and Smooth Losses. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23401–23408. https://doi.org/10.1609/aaai.v40i28.39510

Section

AAAI Technical Track on Machine Learning V