The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems
DOI: https://doi.org/10.1609/hcomp.v7i1.5284

Abstract
Machine learning and artificial intelligence algorithms can assist human decision-making and analysis tasks. While such technology shows promise, willingness to use and rely on intelligent systems may depend on whether people can trust and understand them. To address this issue, researchers have explored explainable interfaces that attempt to convey why or how a system produced its output for a given input. However, the effects of meaningful and meaningless explanations (determined by their alignment with human logic) are not well understood, especially for users who are not experts in data science. We also sought to explore how the inclusion of explanations and their level of meaningfulness affect users' perception of system accuracy. We designed a controlled experiment using an image classification scenario with local explanations to evaluate and better understand these issues. Our results show that whether explanations are human-meaningful can significantly affect perception of a system's accuracy, independent of the actual accuracy observed during system usage. Participants significantly underestimated the system's accuracy when it provided weak, less human-meaningful explanations. For intelligent systems with explainable interfaces, this research therefore demonstrates that users are less likely to correctly judge the accuracy of algorithms that do not operate on human-understandable rationale.