Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content

Authors

  • Anubrata Das, University of Texas at Austin
  • Brandon Dang, University of Texas at Austin
  • Matthew Lease, University of Texas at Austin

Abstract

While most user content posted on social media is benign, other content, such as violent or adult imagery, must be detected and blocked. Unfortunately, such detection is difficult to automate due to high accuracy requirements, the cost of errors, and nuanced rules governing acceptable content. Consequently, social media platforms today rely on a vast workforce of human moderators. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional harm to some moderators. To mitigate such harm, we investigate a set of blur-based moderation interfaces for reducing exposure to disturbing content while preserving moderators' ability to quickly and accurately flag it. We report experiments with Amazon Mechanical Turk workers measuring moderator accuracy, speed, and emotional well-being across six alternative designs. Our key findings show that interactive blurring designs can reduce emotional impact without sacrificing moderation accuracy or speed.

Published

2020-10-01

How to Cite

Das, A., Dang, B., & Lease, M. (2020). Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1), 33-42. Retrieved from https://ojs.aaai.org/index.php/HCOMP/article/view/7461