TY  - JOUR
AU  - Arora, Siddhant
AU  - Pruthi, Danish
AU  - Sadeh, Norman
AU  - Cohen, William W.
AU  - Lipton, Zachary C.
AU  - Neubig, Graham
PY  - 2022/06/28
Y2  - 2024/03/29
TI  - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 5
SE  - AAAI Technical Track on Humans and AI
DO  - 10.1609/aaai.v36i5.20464
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20464
SP  - 5277-5285
AB  - In attempts to "explain" predictions of machine learning models, researchers have proposed hundreds of techniques for attributing predictions to features that are deemed important. While these attributions are often claimed to hold the potential to improve human "understanding" of the models, surprisingly little work explicitly evaluates progress towards this aspiration. In this paper, we conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews. They are challenged both to simulate the model on fresh reviews, and to edit reviews with the goal of lowering the probability of the originally predicted class. Successful manipulations would lead to an adversarial example. During the training (but not the test) phase, input spans are highlighted to communicate salience. Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control. For the BERT-based classifier, popular local explanations do not improve their ability to reduce the model confidence over the no-explanation case. Remarkably, when the explanation for the BERT model is given by the (global) attributions of a linear model trained to imitate the BERT model, people can effectively manipulate the model.
ER  -