“Be Careful; Things Can Be Worse than They Appear”: Understanding Biased Algorithms and Users’ Behavior Around Them in Rating Platforms

Authors

  • Motahhare Eslami University of Illinois Urbana-Champaign
  • Kristen Vaccaro University of Illinois Urbana-Champaign
  • Karrie Karahalios University of Illinois Urbana-Champaign
  • Kevin Hamilton University of Illinois Urbana-Champaign

Abstract

Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases? We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms and found that one site’s algorithmic rating system biased ratings, particularly for low-to-medium quality hotels, significantly higher than the others (up to 37%). Analyzing the reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used reviews on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This awareness-raising included practices such as efforts to reverse-engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust. We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.

Published

2017-05-03

How to Cite

Eslami, M., Vaccaro, K., Karahalios, K., & Hamilton, K. (2017). “Be Careful; Things Can Be Worse than They Appear”: Understanding Biased Algorithms and Users’ Behavior Around Them in Rating Platforms. Proceedings of the International AAAI Conference on Web and Social Media, 11(1), 62–71. Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/14898