Explaining Multimodal Deceptive News Prediction Models

Authors

  • Svitlana Volkova Data Sciences and Analytics Group, Visual Analytics Group
  • Ellyn Ayton Data Sciences and Analytics Group, Visual Analytics Group
  • Dustin L. Arendt National Security Directorate, Pacific Northwest National Laboratory
  • Zhuanyi Huang National Security Directorate, Pacific Northwest National Laboratory
  • Brian Hutchinson Data Sciences and Analytics Group, Visual Analytics Group

DOI:

https://doi.org/10.1609/icwsm.v13i01.3266

Abstract

In this study we present in-depth quantitative and qualitative analyses of the behavior of multimodal deceptive news classification models. We present several neural network architectures, trained on thousands of tweets, that leverage combinations of text, lexical, and, most importantly, image input signals. The behavior of these models is analyzed across four deceptive news prediction tasks. Our quantitative analysis reveals that text-only models outperform those leveraging only image signals (by 3-13% absolute in F-measure). Neural network models that combine image and text signals with lexical features (e.g., biased and subjective language markers) perform even better: for example, the F-measure reaches as high as 0.74 in the binary classification setup for distinguishing between verified content and deceptive content identified as disinformation and propaganda. Our qualitative analysis of model performance, which goes beyond the F-score, is performed using a novel interactive tool, ERRFILTER, that allows a user to characterize text and image traits of suspicious news content and analyze patterns of errors made by the various models, which in turn will inform the design of future deceptive news prediction models.
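The abstract describes models that combine text, image, and lexical signals for binary deceptive-vs-verified classification. A minimal late-fusion sketch of that idea is shown below; the feature dimensions, the fusion strategy (simple concatenation), and the logistic scoring are illustrative assumptions, not the paper's actual architectures or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(text_vec, image_vec, lexical_vec):
    """Late fusion: concatenate per-modality feature vectors into one input."""
    return np.concatenate([text_vec, image_vec, lexical_vec])

def predict_deceptive(features, weights, bias=0.0):
    """Score fused features with a logistic layer; threshold at 0.5
    for a binary verified-vs-deceptive decision."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return score >= 0.5, score

# Hypothetical per-modality features (dimensions are assumptions):
text_vec = rng.normal(size=100)    # e.g., pooled word embeddings of a tweet
image_vec = rng.normal(size=512)   # e.g., CNN features of the attached image
lexical_vec = rng.normal(size=10)  # e.g., bias/subjectivity marker counts

features = fuse_features(text_vec, image_vec, lexical_vec)
weights = rng.normal(size=features.shape[0])  # stand-in for trained weights
is_deceptive, score = predict_deceptive(features, weights)
print(features.shape[0], round(float(score), 3))
```

In a trained system the weights would come from fitting the fused representation on labeled tweets; the sketch only illustrates how the three input signals are combined before classification.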

Published

2019-07-06

How to Cite

Volkova, S., Ayton, E., Arendt, D. L., Huang, Z., & Hutchinson, B. (2019). Explaining Multimodal Deceptive News Prediction Models. Proceedings of the International AAAI Conference on Web and Social Media, 13(01), 659-662. https://doi.org/10.1609/icwsm.v13i01.3266