Probably Approximate Shapley Fairness with Applications in Machine Learning
DOI: https://doi.org/10.1609/aaai.v37i5.25732
Keywords: GTEP: Applications, ML: Evaluation and Analysis (Machine Learning)
Abstract
The Shapley value (SV) is adopted in various machine learning (ML) scenarios, including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are used instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose the fidelity score, a metric that measures the variation of SV estimates and determines how likely the fairness guarantees are to hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that maximises the lowest fidelity score and achieves a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify that GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy across various ML scenarios using real-world datasets.
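For context, the exact Shapley value the abstract refers to assigns each player its average marginal contribution over all coalitions; in standard notation (not taken verbatim from the paper), for a player set $N$ and coalition value function $v$,

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$

The sum ranges over exponentially many coalitions, which is why exact computation is infeasible and estimation is needed. Below is a minimal sketch of the de facto Monte-Carlo estimation the abstract compares against, not the paper's GAE algorithm; the function name and the toy value function are illustrative assumptions.

```python
import random

def monte_carlo_shapley(players, v, num_permutations=1000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over uniformly sampled permutations (illustrative
    baseline, not the paper's greedy active estimation)."""
    rng = random.Random(seed)
    totals = {i: 0.0 for i in players}
    for _ in range(num_permutations):
        perm = list(players)
        rng.shuffle(perm)  # uniform random order of players
        coalition = []
        prev = v(coalition)
        for i in perm:
            coalition.append(i)
            cur = v(coalition)
            totals[i] += cur - prev  # marginal contribution of i
            prev = cur
    return {i: t / num_permutations for i, t in totals.items()}

# Toy usage: coalition value is the sum of (hypothetical) player weights.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[i] for i in S)
print(monte_carlo_shapley(list(weights), v))
```

For an additive value function like this toy example, the exact SVs equal the individual weights, so the estimates should concentrate around 1.0, 2.0, and 3.0 as the number of sampled permutations grows.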
Published
2023-06-26
How to Cite
Zhou, Z., Xu, X., Sim, R. H. L., Foo, C. S., & Low, B. K. H. (2023). Probably Approximate Shapley Fairness with Applications in Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 5910-5918. https://doi.org/10.1609/aaai.v37i5.25732
Issue
Vol. 37 No. 5
Section
AAAI Technical Track on Game Theory and Economic Paradigms