Fairness by “Where”: A Statistically-Robust and Model-Agnostic Bi-level Learning Framework


  • Yiqun Xie University of Maryland
  • Erhu He University of Pittsburgh
  • Xiaowei Jia University of Pittsburgh
  • Weiye Chen University of Maryland, College Park
  • Sergii Skakun University of Maryland
  • Han Bao University of Iowa
  • Zhe Jiang University of Florida
  • Rahul Ghosh University of Minnesota
  • Praveen Ravirathinam University of Minnesota

AI For Social Impact (AISI Track Papers Only), Philosophy And Ethics Of AI (PEAI), Computer Vision (CV), Humans And AI (HAI)


Fairness related to locations (i.e., "where") is critical for the use of machine learning in a variety of societal domains involving spatial datasets (e.g., agriculture, disaster response, urban planning). Spatial biases incurred by learning, if left unattended, may cause or exacerbate unfair distribution of resources, social division, spatial disparity, etc. The goal of this work is to develop statistically-robust formulations and model-agnostic learning strategies to understand and promote spatial fairness. The problem is challenging as locations often come from continuous spaces with no well-defined categories (e.g., gender), and statistical conclusions from spatial data are sensitive to changes in spatial partitionings and scales. Existing studies in fairness-driven learning have generated valuable insights related to non-spatial factors including race, gender, education level, etc., but research to mitigate location-related biases remains in its infancy, leaving the main challenges unaddressed. To bridge the gap, we first propose a robust space-as-distribution (SPAD) representation of spatial fairness to reduce statistical sensitivity related to partitioning and scales in continuous space. Furthermore, we propose a new SPAD-based stochastic strategy to efficiently optimize over an extensive distribution of fairness criteria, and a bi-level training framework to enforce fairness via adaptive adjustment of priorities among locations. Experiments on real-world crop monitoring show that SPAD can effectively reduce sensitivity in fairness evaluation and that the stochastic bi-level training framework can greatly improve fairness.
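To make the two ideas in the abstract concrete, here is a minimal sketch of (a) evaluating spatial fairness as an expectation over many randomly sampled grid partitionings, in the spirit of the space-as-distribution (SPAD) representation, and (b) an outer-level step that raises training priorities for regions with above-average loss, as in the bi-level framework. All function names, the grid-sampling scheme, and the exponential weight update are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_grid_cells(coords, rng):
    """Partition 2-D points in [0,1]^2 into cells of a grid with a
    randomly sampled resolution and offset (one draw from the
    distribution of partitionings)."""
    k = rng.integers(2, 5)          # grid resolution per axis
    off = rng.random(2) / k         # random grid offset
    ix = np.floor((coords - off) * k).astype(int)
    keys = ix[:, 0] * 100 + ix[:, 1]  # encode (row, col) as one key
    return [np.where(keys == u)[0] for u in np.unique(keys)]

def spad_fairness(coords, losses, n_partitions=20, rng=rng):
    """SPAD-style score: average, over random partitionings, of the
    variance of per-cell mean loss (lower = more spatially fair)."""
    scores = []
    for _ in range(n_partitions):
        cells = random_grid_cells(coords, rng)
        cell_means = [losses[idx].mean() for idx in cells if len(idx) > 0]
        scores.append(np.var(cell_means))
    return float(np.mean(scores))

def update_priorities(weights, coords, losses, rng=rng, lr=0.5):
    """Outer-level step: multiplicatively boost sample weights in cells
    whose mean loss exceeds the global mean (the inner level would then
    retrain the model on the reweighted loss)."""
    w = weights.copy()
    mu = losses.mean()
    for idx in random_grid_cells(coords, rng):
        if len(idx) > 0:
            w[idx] *= np.exp(lr * (losses[idx].mean() - mu))
    return w / w.mean()  # renormalize so the average weight stays 1
```

A spatially uniform loss surface yields a SPAD score near zero, while losses concentrated in one region yield a high score and get up-weighted by `update_priorities`; alternating the weight update with ordinary weighted training gives the bi-level loop described in the abstract.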

How to Cite

Xie, Y., He, E., Jia, X., Chen, W., Skakun, S., Bao, H., Jiang, Z., Ghosh, R., & Ravirathinam, P. (2022). Fairness by “Where”: A Statistically-Robust and Model-Agnostic Bi-level Learning Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12208-12216. https://doi.org/10.1609/aaai.v36i11.21481