Multi-Dimensional Fair Federated Learning

Authors

  • Cong Su, Shandong University
  • Guoxian Yu, Shandong University
  • Jun Wang, Shandong University
  • Hui Li, Shandong University
  • Qingzhong Li, Shandong University
  • Han Yu, Nanyang Technological University (NTU)

DOI

https://doi.org/10.1609/aaai.v38i13.29430

Keywords

ML: Ethics, Bias, and Fairness; CSO: Constraint Optimization; ML: Distributed Machine Learning & Federated Learning

Abstract

Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data without compromising privacy. Group fairness and client fairness are two dimensions of fairness that are important for FL. Standard FL can result in disproportionate disadvantages for certain clients, and it still faces the challenge of treating different groups equitably in a population. The problem of privately training fair FL models without compromising the generalization capability of disadvantaged clients remains open. In this paper, we propose a method, called mFairFL, to address this problem and achieve group fairness and client fairness simultaneously. mFairFL leverages differential multipliers to construct an optimization objective for empirical risk minimization with fairness constraints. Before aggregating locally trained models, it first detects conflicts among their gradients, and then iteratively curates the direction and magnitude of the gradients to mitigate these conflicts. Theoretical analysis proves that mFairFL facilitates fairness in model development. Experimental evaluations on three benchmark datasets show significant advantages of mFairFL compared to seven state-of-the-art baselines.
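The conflict-aware aggregation step described in the abstract can be pictured with a small sketch. The snippet below is a minimal NumPy illustration of one plausible way to detect conflicts among client gradients (negative inner products) and project the conflicting component out before averaging; the function names and the PCGrad-style projection rule are illustrative assumptions, not the authors' published algorithm.

```python
# Minimal sketch of conflict detection and mitigation among client gradients
# before server-side aggregation. The projection rule and names are assumptions
# for illustration only, not mFairFL's exact procedure.
import numpy as np

def resolve_conflicts(client_grads, seed=0):
    """Remove pairwise conflicts between client gradients.

    Two gradients are treated as conflicting when their dot product is
    negative; the conflicting component is projected out (PCGrad-style),
    which adjusts both the direction and the magnitude of each gradient.
    """
    rng = np.random.default_rng(seed)
    adjusted = [g.astype(float).copy() for g in client_grads]
    for i, g_i in enumerate(adjusted):
        others = [j for j in range(len(client_grads)) if j != i]
        for j in rng.permutation(others):          # random order, as in PCGrad
            g_j = client_grads[j]
            dot = float(g_i @ g_j)
            if dot < 0.0:                           # conflict detected
                g_i -= dot / (np.linalg.norm(g_j) ** 2 + 1e-12) * g_j
    return adjusted

def aggregate(client_grads, weights=None):
    """Weighted average of conflict-resolved client gradients (FedAvg-style)."""
    grads = resolve_conflicts(client_grads)
    if weights is None:
        weights = np.full(len(grads), 1.0 / len(grads))
    return sum(w * g for w, g in zip(weights, grads))

if __name__ == "__main__":
    g1 = np.array([1.0, 0.5])
    g2 = np.array([-0.8, 1.0])   # conflicts with g1 (negative dot product)
    print(aggregate([g1, g2]))
```

In this sketch the fairness constraints themselves are assumed to be enforced during local training (e.g., via an augmented objective with differential multipliers, as the abstract indicates); only the server-side conflict handling is shown.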

Published

2024-03-24

How to Cite

Su, C., Yu, G., Wang, J., Li, H., Li, Q., & Yu, H. (2024). Multi-Dimensional Fair Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 15083-15090. https://doi.org/10.1609/aaai.v38i13.29430

Section

AAAI Technical Track on Machine Learning IV