Evaluating Deep Taylor Decomposition for Reliability Assessment in the Wild
Keywords: Trust; reputation; recommendation systems; text categorization; topic recognition; demographic/gender/age identification; credibility of online content
Abstract
We argue that we need to evaluate model interpretability methods 'in the wild', i.e., in situations where professionals make critical decisions and models can potentially assist them. We present an in-the-wild evaluation of token attribution based on Deep Taylor Decomposition, with professional journalists performing reliability assessments. We find that using this method in conjunction with RoBERTa-Large, fine-tuned on the Gossip Corpus, led to faster and better human decision-making, as well as a more critical attitude toward news sources among the journalists. We present a comparison of human and model rationales, as well as a qualitative analysis of the journalists' experiences with machine-in-the-loop decision making.
How to Cite
Brandl, S., Hershcovich, D., & Søgaard, A. (2022). Evaluating Deep Taylor Decomposition for Reliability Assessment in the Wild. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1368-1372. https://doi.org/10.1609/icwsm.v16i1.19389