OursFed: Provable Group Fairness-Aware Federated Learning Against Distrust and Fragility

Authors

  • Yun Xin School of Computer Science and Technology, Wuhan University of Science and Technology, China
  • Jianfeng Lu Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, China Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education, China
  • Gang Li College of Computer Science, Inner Mongolia University, China
  • Shuqin Cao Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, China
  • Guanghui Wen School of Automation, Southeast University, China
  • Kehao Wang School of Information Engineering, Wuhan University of Technology, China

DOI:

https://doi.org/10.1609/aaai.v40i32.39926

Abstract

With the increasing use of Federated Learning (FL) in high-stakes decision-making applications, ensuring fairness across different populations to prevent biases against certain groups has become crucial. However, achieving group fairness (GF) in FL presents a formidable challenge due to its decentralization, which complicates global GF estimation by the server. Moreover, distrust and fragility hinder the server from gathering GF values from unreliable clients. This challenge motivates our proposal of OursFed, a provable GF-aware FL framework that integrates a privacy pair-based contract and a robust GF estimation method to address issues of distrust and fragility. Methodologically, we categorize client unreliability into two categories: active unreliability stemming from distrust and passive unreliability arising from fragility. To mitigate active unreliability, we design a privacy pair-based contract that guarantees truthful GF reporting, and enhance multivariate analysis by identifying relationships among multiple private data. To counteract passive unreliability, we develop a robust GF estimation method using non-parametric techniques to smooth data and estimate probability densities and regression functions, improving per-client GF accuracy under multi-dimensional data perturbation. Theoretically, we demonstrate the efficacy of OursFed by analyzing its convergence, GF stability, and accuracy deviation. Experimentally, evaluations on two real datasets show that OursFed improves GF by 28.61% with at most a 2.7% trade-off versus state-of-the-art baselines, and synthetic experiments further confirm its effectiveness in handling fragility and distrust.

Published

2026-03-14

How to Cite

Xin, Y., Lu, J., Li, G., Cao, S., Wen, G., & Wang, K. (2026). OursFed: Provable Group Fairness-Aware Federated Learning Against Distrust and Fragility. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27117–27125. https://doi.org/10.1609/aaai.v40i32.39926

Section

AAAI Technical Track on Machine Learning IX