Trustworthy Social Bias Measurement

Authors

  • Rishi Bommasani, Stanford University
  • Percy Liang, Stanford University

DOI:

https://doi.org/10.1609/aies.v7i1.31630

Abstract

How do we design measures of social bias that we trust? While prior work has introduced several measures, none has gained widespread trust: instead, mounting evidence argues that we should distrust these measures. In this work, we design bias measures that warrant trust, building on the cross-disciplinary theory of measurement modeling. To combat the frequently fuzzy treatment of social bias in natural language processing, we explicitly define social bias, grounded in principles drawn from social science research. We operationalize our definition by proposing a general bias measurement framework, DivDist, which we use to instantiate five concrete bias measures. To validate our measures, we propose a rigorous testing protocol with eight testing criteria (e.g., predictive validity: do measures predict biases in US employment?). Our testing yields considerable evidence that our measures warrant trust, showing that they overcome conceptual, technical, and empirical deficiencies present in prior measures.
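The abstract does not spell out DivDist's formal definition, but its name suggests measuring bias as a divergence between distributions over social groups. As a minimal illustrative sketch only, and not the paper's actual specification, the toy Python below computes bias for a target concept as the total variation distance between an observed group co-occurrence distribution and a normative reference distribution; the group labels, counts, choice of divergence, and uniform reference are all assumptions made here for illustration.

# Illustrative sketch only: a toy divergence-style bias measure in the
# spirit of the abstract's DivDist framing. The groups, counts, choice
# of divergence, and reference distribution are assumptions, not the
# paper's specification.

def group_distribution(cooccurrence_counts: dict[str, int]) -> dict[str, float]:
    """Normalize raw group/concept co-occurrence counts into a distribution."""
    total = sum(cooccurrence_counts.values())
    return {group: count / total for group, count in cooccurrence_counts.items()}

def total_variation_bias(observed: dict[str, float],
                         reference: dict[str, float]) -> float:
    """Bias as total variation distance between the observed group
    distribution and a normative reference (0 = no bias, 1 = maximal)."""
    groups = set(observed) | set(reference)
    return 0.5 * sum(abs(observed.get(g, 0.0) - reference.get(g, 0.0))
                     for g in groups)

# Hypothetical example: how often "nurse" co-occurs with gendered terms
# in some corpus, compared against a uniform reference over two groups.
counts = {"female": 90, "male": 10}
observed = group_distribution(counts)
reference = {"female": 0.5, "male": 0.5}
print(f"bias(nurse) = {total_variation_bias(observed, reference):.2f}")  # 0.40

Under this toy setup, a concept whose observed group distribution matches the reference scores 0, and fully skewed co-occurrence scores 1; different divergences and reference distributions would yield different concrete measures, which is presumably how a framework like DivDist can instantiate several of them.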

Published

2024-10-16

How to Cite

Bommasani, R., & Liang, P. (2024). Trustworthy Social Bias Measurement. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 210-224. https://doi.org/10.1609/aies.v7i1.31630