Rating Reliability and Bias in News Articles: Does AI Assistance Help Everyone?


  • Benjamin D. Horne Rensselaer Polytechnic Institute
  • Dorit Nevo Rensselaer Polytechnic Institute
  • John O’Donovan University of California Santa Barbara
  • Jin-Hee Cho Virginia Polytechnic Institute and State University
  • Sibel Adalı Rensselaer Polytechnic Institute
With the spread of false and misleading information in current news, many algorithmic tools have been introduced with the aim of assessing bias and reliability in written content. However, there has been little work exploring how effective these tools are at changing human perceptions of content. To this end, we conduct a study with 654 participants to understand if algorithmic assistance improves the accuracy of reliability and bias perceptions, and whether there is a difference in the effectiveness of the AI assistance for different types of news consumers. We find that AI assistance with feature-based explanations improves the accuracy of news perceptions. However, some consumers are helped more than others. Specifically, we find that participants who read and share news often on social media are worse at recognizing bias and reliability issues in news articles than those who do not, while frequent news readers and those familiar with politics perform much better. We discuss these differences and their implications to offer insights for future research.
How to Cite

Horne, B. D., Nevo, D., O’Donovan, J., Cho, J.-H., & Adalı, S. (2019). Rating Reliability and Bias in News Articles: Does AI Assistance Help Everyone?. Proceedings of the International AAAI Conference on Web and Social Media, 13(01), 247-256. https://doi.org/10.1609/icwsm.v13i01.3226