Reliability Matters: Exploring the Effect of AI Explanations on Misinformation Detection with a Warning
DOI:
https://doi.org/10.1609/icwsm.v18i1.31397
Abstract
To mitigate misinformation on social media, platforms such as Facebook have offered warnings to users based on the detection results of AI systems. As AI detection systems have evolved, explainable AI (XAI) has been applied to further increase the transparency of AI decision-making. Nevertheless, few studies have examined the factors that determine the effectiveness of a warning with AI explanations in helping humans detect misinformation. In this study, we report the results of three online human-subject experiments (N = 2,692) investigating the framing effect and the impact of an AI system's reliability on the effectiveness of AI warnings with explanations. Our findings show that the framing of the warning affects participants' misinformation detection, and that the AI system's reliability is critical both for participants' misinformation detection and for their trust in the AI system. However, adding explanations can increase participants' suspicion of miss errors (i.e., false negatives) by the AI system. Furthermore, participants showed more trust in the AI warning without explanations. We conclude by discussing the implications of our findings.
Published
2024-05-28
How to Cite
Seo, H., Lee, S., Lee, D., & Xiong, A. (2024). Reliability Matters: Exploring the Effect of AI Explanations on Misinformation Detection with a Warning. Proceedings of the International AAAI Conference on Web and Social Media, 18(1), 1395-1407. https://doi.org/10.1609/icwsm.v18i1.31397
Issue
Vol. 18 No. 1 (2024)
Section
Full Papers