Performative Federated Learning: A Solution to Model-Dependent and Heterogeneous Distribution Shifts
DOI:
https://doi.org/10.1609/aaai.v38i11.29191
Keywords:
ML: Adversarial Learning & Robustness, GTEP: Other Foundations of Game Theory & Economic Paradigms, ML: Distributed Machine Learning & Federated Learning, MAS: Multiagent Learning
Abstract
We consider a federated learning (FL) system consisting of multiple clients and a server, where the clients aim to collaboratively learn a common decision model from their distributed data. Unlike the conventional FL framework, which assumes the clients' data are static, we consider scenarios where the clients' data distributions may be reshaped by the deployed decision model. In this work, we leverage the idea of distribution shift mappings in performative prediction to formalize this model-dependent data distribution shift and propose a performative FL framework. We first introduce necessary and sufficient conditions for the existence of a unique performative stable solution and characterize its distance to the performative optimal solution. We then propose the performative FedAvg algorithm and show that it converges to the performative stable solution at a rate of O(1/T) under both full and partial client participation schemes. In particular, we use novel proof techniques to show how the clients' heterogeneity influences convergence. Numerical results validate our analysis and provide valuable insights into real-world applications.
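To make the setting concrete, the dynamic the abstract describes can be sketched in a toy simulation: each client's data distribution depends on the currently deployed global model, clients run local SGD on data drawn from that induced distribution, and the server averages the local models (FedAvg). The example below is our own illustration, not the paper's algorithm or experiments; the shift map D_i(θ) = N(μ_i + ε_i·θ, 1), the per-client parameters, and all step sizes are hypothetical choices made for the sketch. With these linear shift maps and squared loss, the performative stable point has the closed form θ = μ̄ / (1 − ε̄), which the iterates approach.

```python
import random

# Toy performative FL sketch (hypothetical setup, not the paper's experiments):
# scalar mean estimation where client i's data follows N(mu_i + eps_i * theta, 1),
# i.e., the distribution is reshaped by the deployed model theta.
random.seed(0)
mus = [1.0, 2.0, 3.0]    # heterogeneous base means (hypothetical)
epss = [0.2, 0.3, 0.1]   # per-client performative sensitivities (hypothetical)
theta = 0.0              # global model deployed at the start of each round
lr, local_steps, rounds = 0.1, 5, 300

for _ in range(rounds):
    local_models = []
    for mu, eps in zip(mus, epss):
        local = theta
        for _ in range(local_steps):
            # sample from the distribution induced by the DEPLOYED model theta
            z = random.gauss(mu + eps * theta, 1.0)
            local -= lr * (local - z)  # SGD step on squared loss (local - z)^2 / 2
        local_models.append(local)
    theta = sum(local_models) / len(local_models)  # server-side FedAvg aggregation

# Performative stable point: theta = mean_i(mu_i + eps_i * theta),
# i.e., theta = mean(mu) / (1 - mean(eps)) when mean(eps) < 1.
stable = (sum(mus) / len(mus)) / (1 - sum(epss) / len(epss))
print(f"final theta = {theta:.3f}, stable point = {stable:.3f}")
```

The iterates hover around the stable point rather than converging exactly, since each round uses finitely many stochastic samples; this mirrors the role of the O(1/T) rate (with decaying step sizes) in the analysis.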
Published
2024-03-24
How to Cite
Jin, K., Yin, T., Chen, Z., Sun, Z., Zhang, X., Liu, Y., & Liu, M. (2024). Performative Federated Learning: A Solution to Model-Dependent and Heterogeneous Distribution Shifts. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12938-12946. https://doi.org/10.1609/aaai.v38i11.29191
Section
AAAI Technical Track on Machine Learning II