An Interpretable Joint Graphical Model for Fact-Checking From Crowds

Authors

  • An Nguyen, University of Texas at Austin
  • Aditya Kharosekar, University of Texas at Austin
  • Matthew Lease, University of Texas at Austin
  • Byron Wallace, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v32i1.11487

Keywords:

graphical models, variational inference, crowdsourcing, natural language processing

Abstract

Assessing the veracity of claims made on the Internet is an important, challenging, and timely problem. While automated fact-checking models have the potential to help people better assess what they read, we argue that such models must be explainable, accurate, and fast to be useful in practice; prediction accuracy is clearly important, but model transparency is critical for users to trust the system and integrate their own knowledge with model predictions. To achieve this, we propose a novel probabilistic graphical model (PGM) which combines machine learning with crowd annotations. Nodes in our model correspond to claim veracity, article stance regarding claims, reputation of news sources, and annotator reliabilities. We introduce a fast variational method for parameter estimation. Evaluation across two real-world datasets and three scenarios shows that: (1) jointly modeling sources, claims, and crowd annotators in a PGM improves both predictive performance and interpretability for claim veracity prediction; and (2) our variational inference method achieves fast, scalable parameter estimation, with only modest degradation in performance compared to Gibbs sampling. Regarding model transparency, we designed and deployed a prototype fact-checking Web tool, including a visual interface for explaining model predictions. Results of a small user study indicate that model explanations improve user satisfaction and trust in model predictions. We share our Web demo, model source code, and the 13K crowd labels we collected.
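To make the abstract's joint-modeling idea concrete, below is a minimal Python sketch (our illustration, not the authors' released code) of alternating mean-field-style updates over latent claim veracity and annotator reliability, a simplified Dawid-Skene-like reduction of the full model; the source-reputation and article-stance components of the paper's PGM are omitted, and all variable names and the simulated data are hypothetical.

# Toy sketch of joint claim/annotator inference with alternating
# mean-field-style updates. This simplifies the paper's PGM: source
# reputation and article stance are dropped, and each annotator's
# reliability is a single symmetric accuracy parameter.
import numpy as np

rng = np.random.default_rng(0)
n_claims, n_annotators = 50, 10

# Simulated ground truth (hypothetical data, for illustration only).
true_veracity = rng.integers(0, 2, n_claims)             # latent claim labels
true_reliability = rng.uniform(0.6, 0.95, n_annotators)  # P(annotator is correct)

# Each annotator labels each claim correctly with probability equal to
# that annotator's reliability; otherwise the label is flipped.
correct = rng.random((n_claims, n_annotators)) < true_reliability
labels = np.where(correct, true_veracity[:, None], 1 - true_veracity[:, None])

q_v = np.full(n_claims, 0.5)              # variational posterior q(v_i = 1)
reliability = np.full(n_annotators, 0.7)  # current reliability estimates

for _ in range(20):
    # E-like step: update claim posteriors from reliability-weighted votes.
    log_odds = np.log(reliability) - np.log(1.0 - reliability)
    score = ((2 * labels - 1) * log_odds[None, :]).sum(axis=1)
    q_v = 1.0 / (1.0 + np.exp(-score))
    # M-like step: update reliabilities as expected agreement with q(v).
    agree = labels * q_v[:, None] + (1 - labels) * (1 - q_v[:, None])
    reliability = np.clip(agree.mean(axis=0), 0.01, 0.99)

pred = (q_v > 0.5).astype(int)
print("accuracy vs. simulated truth:", (pred == true_veracity).mean())

In this reduced setting the alternating updates recover both claim labels and annotator accuracies from redundant, noisy crowd labels; the paper's full PGM extends the same idea with source reputation and article stance nodes, estimated by variational inference or Gibbs sampling.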

Published

2018-04-25

How to Cite

Nguyen, A., Kharosekar, A., Lease, M., & Wallace, B. (2018). An Interpretable Joint Graphical Model for Fact-Checking From Crowds. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11487

Issue

Vol. 32 No. 1 (2018): Proceedings of the AAAI Conference on Artificial Intelligence

Section

AAAI Technical Track: Human-AI Collaboration