Taxonomizing and Measuring Representational Harms: A Look at Image Tagging

Authors

  • Jared Katzman, University of Michigan
  • Angelina Wang, Princeton University
  • Morgan Scheuerman, University of Colorado Boulder
  • Su Lin Blodgett, Microsoft Research
  • Kristen Laird, Microsoft
  • Hanna Wallach, Microsoft Research
  • Solon Barocas, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v37i12.26670

Keywords:

General

Abstract

In this paper, we examine computational approaches for measuring the "fairness" of image tagging systems, finding that they cluster into five distinct categories, each with its own analytic foundation. We also identify a range of normative concerns that are often collapsed under the terms "unfairness," "bias," or even "discrimination" when discussing problematic cases of image tagging. Specifically, we identify four types of representational harms that can be caused by image tagging systems, providing concrete examples of each. We then consider how different computational measurement approaches map to each of these types, demonstrating that there is not a one-to-one mapping. Our findings emphasize that no single measurement approach will be definitive and that it is not possible to infer from the use of a particular measurement approach which type of harm was intended to be measured. Lastly, equipped with this more granular understanding of the types of representational harms that can be caused by image tagging systems, we show that attempts to mitigate some of these types of harms may be in tension with one another.

Published

2023-06-26

How to Cite

Katzman, J., Wang, A., Scheuerman, M., Blodgett, S. L., Laird, K., Wallach, H., & Barocas, S. (2023). Taxonomizing and Measuring Representational Harms: A Look at Image Tagging. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14277-14285. https://doi.org/10.1609/aaai.v37i12.26670

Section

AAAI Special Track on AI for Social Impact