FairFed: Enabling Group Fairness in Federated Learning
DOI
https://doi.org/10.1609/aaai.v37i6.25911
Keywords
ML: Distributed Machine Learning & Federated Learning, ML: Bias and Fairness, PEAI: Bias, Fairness & Equity
Abstract
Training ML models that are fair across different demographic groups is of critical importance due to the increased integration of ML into crucial decision-making scenarios such as healthcare and recruitment. Federated learning has been viewed as a promising solution for collaboratively training machine learning models among multiple parties while maintaining the privacy of their local data. However, federated learning also poses new challenges in mitigating potential bias against certain populations (e.g., demographic groups), as this typically requires centralized access to the sensitive information (e.g., race, gender) of each data point. Motivated by the importance and challenges of group fairness in federated learning, we propose FairFed, a novel algorithm for fairness-aware aggregation to enhance group fairness in federated learning. Our approach is server-side and agnostic to the applied local debiasing, thus allowing flexible use of different local debiasing methods across clients. We evaluate FairFed empirically against common baselines for fair ML and federated learning and demonstrate that it produces fairer models, particularly under highly heterogeneous data distributions across clients. We also demonstrate the benefits of FairFed in scenarios involving naturally distributed real-life data collected from different geographical locations or departments within an organization.
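As a rough illustration of the fairness-aware aggregation idea the abstract describes, the Python sketch below shows one server-side aggregation round that reweights client updates according to how far each client's locally computed fairness metric (here, Equal Opportunity Difference) deviates from a global estimate. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the function names local_eod and fairness_aware_aggregate, the binary label and binary sensitive attribute setup, and the beta-scaled weight update are all illustrative choices.

import numpy as np

def local_eod(y_true, y_pred, sensitive):
    # Illustrative local fairness metric: Equal Opportunity Difference,
    # i.e., the gap in true-positive rate between the two sensitive groups.
    def tpr(group_mask):
        pos = (y_true == 1) & group_mask
        return y_pred[pos].mean() if pos.any() else 0.0
    return tpr(sensitive == 1) - tpr(sensitive == 0)

def fairness_aware_aggregate(client_updates, client_eods, prev_weights, beta=1.0):
    # Assumed update rule: shrink the weight of clients whose local metric
    # deviates most from the weighted global estimate; beta controls how
    # aggressively fairness deviations are penalized.
    client_eods = np.asarray(client_eods)
    global_eod = np.dot(prev_weights, client_eods)
    deviation = np.abs(client_eods - global_eod)
    weights = np.maximum(prev_weights - beta * deviation, 1e-12)
    weights /= weights.sum()
    # Standard weighted averaging of the clients' model updates.
    aggregated = sum(w * u for w, u in zip(weights, client_updates))
    return aggregated, weights

# Hypothetical usage with three clients and flattened model updates:
updates = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.0, 0.4])]
eods = [0.05, -0.20, 0.10]
w0 = np.full(3, 1 / 3)  # e.g., initial FedAvg weights from dataset sizes
aggregated, w1 = fairness_aware_aggregate(updates, eods, w0, beta=0.5)

Because only a scalar fairness statistic leaves each client, a scheme of this shape stays agnostic to whichever local debiasing method each client applies, which is the property the abstract emphasizes.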
Published
2023-06-26
How to Cite
Ezzeldin, Y. H., Yan, S., He, C., Ferrara, E., & Avestimehr, A. S. (2023). FairFed: Enabling Group Fairness in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7494-7502. https://doi.org/10.1609/aaai.v37i6.25911
Section
AAAI Technical Track on Machine Learning I