Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy

Authors

  • Donald R. Honeycutt, University of Florida, Gainesville
  • Mahsan Nourani, University of Florida, Gainesville
  • Eric D. Ragan, University of Florida, Gainesville

DOI

https://doi.org/10.1609/hcomp.v8i1.7464

Abstract

Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire greater control over the systems they rely on and the ability to fix perceived flaws in them. However, how the ability to provide feedback to autonomous systems influences user trust remains largely unexplored. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants’ trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.
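
The abstract describes interactive feedback in general terms; the sketch below is a minimal, hypothetical illustration of such a human-in-the-loop loop, not the study's actual object detection system. A model makes a prediction, a user confirms or corrects it, and the correction is used to incrementally update model parameters. The synthetic data, the user_feedback stub, and the use of scikit-learn's SGDClassifier.partial_fit are all assumptions made for illustration.

```python
# Minimal sketch (hypothetical, not the authors' system) of an interactive
# machine-learning loop where per-prediction user feedback updates the model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # hypothetical feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical ground-truth labels

model = SGDClassifier(random_state=0)
model.partial_fit(X[:50], y[:50], classes=[0, 1])   # initial training

def user_feedback(x, prediction, true_label):
    """Stand-in for a human reviewer who confirms or corrects a prediction."""
    return true_label  # a real interface would solicit this from the user

# Human-in-the-loop phase: show each prediction, collect a corrected label,
# and incrementally update the model with that single labeled example.
for x, true_label in zip(X[50:], y[50:]):
    pred = model.predict(x.reshape(1, -1))[0]
    corrected = user_feedback(x, pred, true_label)
    model.partial_fit(x.reshape(1, -1), [corrected])
```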

Published

2020-10-01

How to Cite

Honeycutt, D., Nourani, M., & Ragan, E. (2020). Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1), 63-72. https://doi.org/10.1609/hcomp.v8i1.7464