Differentially Private and Communication Efficient Collaborative Learning

Authors

  • Jiahao Ding University of Houston
  • Guannan Liang University of Connecticut
  • Jinbo Bi University of Connecticut
  • Miao Pan University of Houston

Keywords

Ethics -- Bias, Fairness, Transparency & Privacy, Distributed Machine Learning & Federated Learning, Learning on the Edge & Model Compression, Classification and Regression

Abstract

Collaborative learning has received significant interest due to its capability of exploiting the collective computing power of wireless edge devices. However, during the learning process, model updates using local private samples and large-scale parameter exchanges among agents raise severe privacy concerns and create a communication bottleneck. In this paper, to address these problems, we propose two differentially private (DP) and communication-efficient algorithms, called Q-DPSGD-1 and Q-DPSGD-2. In Q-DPSGD-1, each agent first performs local model updates by a DP gradient descent method to provide the DP guarantee and then quantizes the local model before transmitting it to neighbors to improve communication efficiency. In Q-DPSGD-2, each agent first quantizes the local model and then injects discrete Gaussian noise to enforce the DP guarantee. Moreover, we track the privacy loss of both approaches under the Rényi DP and provide convergence analysis for both convex and non-convex loss functions. The proposed methods are evaluated in extensive experiments on real-world datasets, and the empirical results validate our theoretical findings.
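The two local steps the abstract describes for Q-DPSGD-1 (a DP gradient update followed by quantization) can be sketched as below. This is an illustrative sketch only, not the authors' implementation: the function names, hyperparameters, clipping rule, and the uniform stochastic quantizer are assumptions, and the noise calibration via Rényi-DP accounting from the paper is not reproduced here.

```python
import numpy as np

def dp_sgd_step(w, grad, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One local DP gradient step: clip the gradient to L2 norm <= clip,
    then add Gaussian noise (the standard Gaussian mechanism; the exact
    sigma would come from the paper's Renyi-DP privacy accounting)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    grad = grad / max(1.0, norm / clip)  # clip the per-sample gradient
    noisy = grad + rng.normal(0.0, sigma * clip, size=grad.shape)
    return w - lr * noisy

def stochastic_quantize(x, levels=16, rng=None):
    """Unbiased stochastic quantization of each coordinate onto `levels`
    uniform levels (a common quantizer; the paper's scheme may differ)."""
    rng = rng or np.random.default_rng()
    scale = np.max(np.abs(x)) or 1.0         # per-vector scaling factor
    y = np.abs(x) / scale * (levels - 1)
    lower = np.floor(y)
    q = lower + (rng.random(x.shape) < (y - lower))  # round up w.p. frac(y)
    return np.sign(x) * q * scale / (levels - 1)

# Q-DPSGD-1 order: noisy DP update first, then quantize before transmission.
# (Q-DPSGD-2 reverses the order and uses discrete Gaussian noise instead,
# whose sampler is more involved and omitted here.)
rng = np.random.default_rng(0)
w_local = dp_sgd_step(np.zeros(5), np.ones(5), rng=rng)
w_sent = stochastic_quantize(w_local, levels=16, rng=rng)
```

The ordering is the key difference between the two methods: quantizing after the noise (Q-DPSGD-1) keeps the DP analysis on real-valued updates, while noising after quantization (Q-DPSGD-2) requires discrete noise so the transmitted message stays on the quantization grid.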

Published

2021-05-18

How to Cite

Ding, J., Liang, G., Bi, J., & Pan, M. (2021). Differentially Private and Communication Efficient Collaborative Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7219-7227. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16887

Section

AAAI Technical Track on Machine Learning I