Toward Understanding the Influence of Individual Clients in Federated Learning

Authors

  • Yihao Xue, Shanghai Jiao Tong University
  • Chaoyue Niu, Shanghai Jiao Tong University
  • Zhenzhe Zheng, Shanghai Jiao Tong University
  • Shaojie Tang, University of Texas at Dallas
  • Chengfei Lyu, Alibaba Group
  • Fan Wu, Shanghai Jiao Tong University
  • Guihai Chen, Shanghai Jiao Tong University

Keywords

Distributed Machine Learning & Federated Learning

Abstract

Federated learning allows mobile clients to jointly train a global model without sending their private data to a central server. Extensive work has studied the performance guarantees of the global model; however, it remains unclear how each individual client influences the collaborative training process. In this work, we define a new notion, called Fed-Influence, to quantify this influence over the model parameters, and propose an effective and efficient algorithm to estimate this metric. In particular, our design satisfies several desirable properties: (1) it requires neither retraining nor retracing, adding only linear computational overhead to clients and the server; (2) it strictly maintains the tenets of federated learning, without revealing any client's local private data; and (3) it works well on both convex and non-convex loss functions, and does not require the final model to be optimal. Empirical results on a synthetic dataset and the FEMNIST dataset demonstrate that our estimation method can approximate Fed-Influence with small bias. Further, we show an application of Fed-Influence in model debugging.
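To make the notion concrete, the sketch below illustrates the naive, retraining-based baseline that a client-influence metric like Fed-Influence is defined against: run federated averaging once with all clients and once with one client held out, and take the resulting change in model parameters as that client's influence. This toy setup (a 1-D linear model, the `fedavg` helper, the client count, and all hyperparameters) is our own illustrative assumption, not the paper's algorithm — the paper's contribution is precisely an estimator that avoids this expensive leave-one-out retraining.

```python
import numpy as np

def fedavg(client_data, rounds=50, lr=0.1, local_steps=5):
    """Toy FedAvg on a 1-D linear model w; each client holds (x, y) arrays."""
    w = 0.0
    for _ in range(rounds):
        local_models = []
        for x, y in client_data:
            wi = w
            for _ in range(local_steps):  # local SGD on squared loss
                grad = np.mean(2 * x * (wi * x - y))
                wi -= lr * grad
            local_models.append(wi)
        w = float(np.mean(local_models))  # server averages local models
    return w

rng = np.random.default_rng(0)
# five "clean" clients whose data follow y = 3x
clients = [(x, 3.0 * x + 0.1 * rng.normal(size=20))
           for x in (rng.normal(size=20) for _ in range(5))]
# one "noisy" client whose labels point the opposite way
x_bad = rng.normal(size=20)
clients.append((x_bad, -3.0 * x_bad))

w_full = fedavg(clients)          # trained with every client
w_loo = fedavg(clients[:-1])      # retrained without the noisy client
influence = w_full - w_loo        # leave-one-out parameter change
```

Here the noisy client drags the global model away from the clean consensus, so its leave-one-out influence on `w` is large; an efficient estimator would recover roughly this quantity without the second training run.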

Published

2021-05-18

How to Cite

Xue, Y., Niu, C., Zheng, Z., Tang, S., Lyu, C., Wu, F., & Chen, G. (2021). Toward Understanding the Influence of Individual Clients in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10560-10567. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17263

Section

AAAI Technical Track on Machine Learning V