See How You Read? Multi-Reading Habits Fusion Reasoning for Multi-Modal Fake News Detection

Authors

  • Lianwei Wu, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science, Northwestern Polytechnical University, China; Research & Development Institute of Northwestern Polytechnical University in Shenzhen, China; Chongqing Science and Technology Innovation Center of Northwestern Polytechnical University, China
  • Pusheng Liu, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science, Northwestern Polytechnical University, China
  • Yanning Zhang, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science, Northwestern Polytechnical University, China

DOI:

https://doi.org/10.1609/aaai.v37i11.26609

Keywords:

SNLP: Applications, APP: Humanities & Computational Social Science, SNLP: Sentence-Level Semantics and Textual Inference, SNLP: Text Mining

Abstract

Existing approaches based on various neural networks automatically capture and fuse the multimodal semantics of news and have achieved great success in fake news detection. However, they still suffer from two limitations: shallow fusion of multimodal features and insufficient attention to the inconsistency between different modalities. To overcome these limitations, we propose multi-reading habits fusion reasoning networks (MRHFR) for multi-modal fake news detection. In MRHFR, inspired by people's different reading habits for multimodal news, we summarize three basic cognitive reading habits and put forward a cognition-aware fusion layer to learn the dependencies between the multimodal features of news, thereby deepening their semantic-level integration. To explore the inconsistency between different modalities of news, we develop a coherence constraint reasoning layer that works from two perspectives: it first measures the semantic consistency between the comments and the different modal features of the news, and then probes the semantic deviation that unimodal features introduce into the multimodal news content through a constraint strategy. Experiments on two public datasets demonstrate that MRHFR not only achieves excellent performance but also provides a new paradigm for capturing inconsistencies between the modalities of multi-modal news.
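
As a rough illustration of the kind of architecture the abstract describes, the sketch below combines attention-based cross-modal fusion with a comment-based coherence score in PyTorch. It is a minimal, hypothetical example under assumed feature shapes and module names (CrossModalFusionSketch, coh_text, coh_image), not the authors' MRHFR implementation; see the full paper for the actual model.

# Illustrative sketch only: attention-based cross-modal fusion plus a
# comment-based coherence score, loosely inspired by the ideas in the abstract.
# Dimensions and names are assumptions, not the authors' MRHFR code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusionSketch(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Text attends to image regions and vice versa: token-level fusion
        # rather than a shallow concatenation of pooled features.
        self.text_to_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # fake / real

    def forward(self, text_feats, image_feats, comment_feats):
        # text_feats:    (B, Lt, dim) token-level text features
        # image_feats:   (B, Li, dim) region-level image features
        # comment_feats: (B, Lc, dim) token-level comment features
        t2i, _ = self.text_to_image(text_feats, image_feats, image_feats)
        i2t, _ = self.image_to_text(image_feats, text_feats, text_feats)

        # Pool each fused stream into one vector per sample.
        fused_text = t2i.mean(dim=1)
        fused_image = i2t.mean(dim=1)
        comments = comment_feats.mean(dim=1)

        # Coherence: cosine similarity between comments and each modality.
        # A large gap between the two scores hints at cross-modal inconsistency.
        coh_text = F.cosine_similarity(comments, fused_text, dim=-1)
        coh_image = F.cosine_similarity(comments, fused_image, dim=-1)
        inconsistency = (coh_text - coh_image).abs()

        logits = self.classifier(torch.cat([fused_text, fused_image], dim=-1))
        return logits, inconsistency


if __name__ == "__main__":
    model = CrossModalFusionSketch()
    B, dim = 2, 256
    logits, inconsistency = model(
        torch.randn(B, 32, dim), torch.randn(B, 49, dim), torch.randn(B, 20, dim)
    )
    print(logits.shape, inconsistency.shape)  # torch.Size([2, 2]) torch.Size([2])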

Published

2023-06-26

How to Cite

Wu, L., Liu, P., & Zhang, Y. (2023). See How You Read? Multi-Reading Habits Fusion Reasoning for Multi-Modal Fake News Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13736-13744. https://doi.org/10.1609/aaai.v37i11.26609

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing