FedMDFG: Federated Learning with Multi-Gradient Descent and Fair Guidance

Authors

  • Zibin Pan — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society
  • Shuyi Wang — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society
  • Chi Li — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
  • Haijin Wang — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
  • Xiaoying Tang — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society; The Guangdong Provincial Key Laboratory of Future Networks of Intelligence
  • Junhua Zhao — The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society

DOI:

https://doi.org/10.1609/aaai.v37i8.26122

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Applications, ML: Bias and Fairness, PEAI: Bias, Fairness & Equity

Abstract

Fairness has been considered a critical problem in federated learning (FL). In this work, we analyze two direct causes of unfairness in FL: an unfair update direction and an improper step size when updating the model. To address these issues, we introduce an effective way to measure the fairness of the model through cosine similarity, and then propose a federated multiple gradient descent algorithm with fair guidance (FedMDFG) to drive the model toward fairness. We first convert FL into a multi-objective optimization problem (MOP) and design an advanced multiple gradient descent algorithm that calculates a fair descent direction by adding a fair-driven objective to the MOP. A low-communication-cost line search strategy is then designed to find a better step size for the model update. We further provide theoretical analysis of how FedMDFG enhances fairness and guarantees convergence. Finally, extensive experiments in several FL scenarios verify that FedMDFG is robust and outperforms state-of-the-art FL algorithms in both convergence and fairness. The source code is available at https://github.com/zibinpan/FedMDFG.
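The abstract's cosine-similarity fairness measure can be sketched as follows. This is an illustrative interpretation, not the paper's exact formulation: fairness is taken as the cosine similarity between the vector of per-client losses and the all-ones direction, so that perfectly uniform client losses score 1.0 and skewed losses score lower. The function name `fairness_cosine` is a hypothetical helper introduced here for illustration.

```python
import numpy as np

def fairness_cosine(client_losses):
    """Cosine similarity between the client-loss vector and the
    all-ones direction. Returns 1.0 when every client has the same
    loss (maximally fair); smaller values indicate more disparity.
    Illustrative sketch only, assuming nonnegative, nonzero losses.
    """
    losses = np.asarray(client_losses, dtype=float)
    ones = np.ones_like(losses)
    return float(
        losses @ ones / (np.linalg.norm(losses) * np.linalg.norm(ones))
    )

# Uniform losses across clients -> similarity 1.0.
print(fairness_cosine([0.5, 0.5, 0.5]))
# Skewed losses (one client much worse off) -> similarity below 1.0.
print(fairness_cosine([0.9, 0.1, 0.1]))
```

Because the measure depends only on the direction of the loss vector, scaling all client losses by a constant leaves it unchanged, which makes it a natural way to compare the uniformity of client performance independently of the overall loss level.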

Published

2023-06-26

How to Cite

Pan, Z., Wang, S., Li, C., Wang, H., Tang, X., & Zhao, J. (2023). FedMDFG: Federated Learning with Multi-Gradient Descent and Fair Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9364-9371. https://doi.org/10.1609/aaai.v37i8.26122

Section

AAAI Technical Track on Machine Learning III