CrowdLens: Experimenting with Crowd-Powered Recommendation and Explanation

Authors

  • Shuo Chang, University of Minnesota
  • F. Maxwell Harper, University of Minnesota
  • Lingfei He, University of Minnesota
  • Loren Terveen, University of Minnesota

DOI:

https://doi.org/10.1609/icwsm.v10i1.14743

Abstract

Recommender systems face several challenges, e.g., recommending novel and diverse items and generating helpful explanations. Where algorithms struggle, people may excel. We therefore designed CrowdLens to explore different workflows for incorporating people into the recommendation process. We did an online experiment, finding that: compared to a state-of-the-art algorithm, crowdsourcing workflows produced more diverse and novel recommendations favored by human judges; some crowdworkers produced high-quality explanations for their recommendations, and we created an accurate model for identifying high-quality explanations; volunteers from an online community generally performed better than paid crowdworkers, but appropriate algorithmic support erased this gap. We conclude by reflecting on lessons of our work for those considering a crowdsourcing approach and identifying several fundamental issues for future work.

Published

2021-08-04

How to Cite

Chang, S., Harper, F., He, L., & Terveen, L. (2021). CrowdLens: Experimenting with Crowd-Powered Recommendation and Explanation. Proceedings of the International AAAI Conference on Web and Social Media, 10(1), 52-61. https://doi.org/10.1609/icwsm.v10i1.14743