Probably Approximate Shapley Fairness with Applications in Machine Learning


  • Zijian Zhou, National University of Singapore
  • Xinyi Xu, National University of Singapore; Institute for Infocomm Research, A*STAR
  • Rachael Hwee Ling Sim, National University of Singapore
  • Chuan Sheng Foo, Institute for Infocomm Research, A*STAR; Centre for Frontier AI Research, A*STAR
  • Bryan Kian Hsiang Low, National University of Singapore



GTEP: Applications, ML: Evaluation and Analysis (Machine Learning)


The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, they are replaced by SV estimates. This approximation raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose the fidelity score, a metric that measures the variation of SV estimates and determines how probably the fairness guarantees hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that maximises the lowest fidelity score and achieves a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify that GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy in various ML scenarios on real-world datasets.
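For context on the baseline the abstract refers to, the following is a minimal sketch of the de facto Monte-Carlo (permutation-sampling) SV estimator, not the paper's GAE algorithm. The coalition value function `v` and the toy additive `weights` below are illustrative assumptions: for an additive game, each player's exact SV equals its own weight, so the estimate is exact regardless of the sampled permutations.

```python
import random

def monte_carlo_shapley(players, v, num_samples=1000, seed=0):
    """Standard Monte-Carlo (permutation sampling) Shapley estimator.

    players: list of player identifiers.
    v: coalition value function mapping a frozenset of players to a float.
    Averages each player's marginal contribution over random permutations.
    """
    rng = random.Random(seed)
    est = {p: 0.0 for p in players}
    for _ in range(num_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition = []
        prev_value = v(frozenset())
        for p in perm:
            coalition.append(p)
            cur_value = v(frozenset(coalition))
            est[p] += cur_value - prev_value  # marginal contribution of p
            prev_value = cur_value
    return {p: total / num_samples for p, total in est.items()}

# Toy additive game (an assumption for illustration): the value of a
# coalition is the sum of its members' weights, so exact SVs equal the weights.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = monte_carlo_shapley(list(weights),
                          lambda S: sum(weights[p] for p in S))
```

Because every marginal contribution in an additive game is constant, `phi` recovers the weights exactly here; for general value functions the estimator only converges with the number of sampled permutations, which is the variability the paper's fidelity score quantifies.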




How to Cite

Zhou, Z., Xu, X., Sim, R. H. L., Foo, C. S., & Low, B. K. H. (2023). Probably Approximate Shapley Fairness with Applications in Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 5910-5918.



AAAI Technical Track on Game Theory and Economic Paradigms