Propagation Tree Is Not Deep: Adaptive Graph Contrastive Learning Approach for Rumor Detection

Authors

  • Chaoqun Cui School of Computer and Information Technology & Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University
  • Caiyan Jia School of Computer and Information Technology & Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University

DOI:

https://doi.org/10.1609/aaai.v38i1.27757

Keywords:

APP: Misinformation & Fake News, NLP: Applications, NLP: Text Classification

Abstract

Rumor detection on social media has become increasingly important. Most existing graph-based models presume rumor propagation trees (RPTs) have deep structures and learn sequential stance features along their branches. However, through statistical analysis of real-world datasets, we find that RPTs exhibit wide structures, with most nodes being shallow 1-level replies. To focus learning on the dense substructures of these wide trees, we propose the Rumor Adaptive Graph Contrastive Learning (RAGCL) method, which performs adaptive view augmentation guided by node centralities. We summarize three principles for RPT augmentation: 1) exempt root nodes, 2) retain deep reply nodes, 3) preserve lower-level nodes in deep sections. We employ node dropping, attribute masking, and edge dropping, with probabilities derived from centrality-based node importance scores, to generate views. A graph contrastive objective then learns robust rumor representations. Extensive experiments on four benchmark datasets demonstrate that RAGCL outperforms state-of-the-art methods. Our work reveals the wide-structure nature of RPTs and contributes an effective graph contrastive learning approach tailored for rumor detection through principled adaptive augmentation. The proposed principles and augmentation techniques can potentially benefit other applications involving tree-structured graphs.
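
To illustrate the kind of centrality-guided augmentation the abstract describes, the sketch below generates two stochastic views of a toy propagation tree by dropping nodes with probabilities inversely tied to a simple degree-centrality score, while always keeping the root post (principle 1). This is a minimal illustration under stated assumptions: the function names, the use of degree centrality, and the score-to-probability mapping are our own placeholders, not the paper's formulation, which also covers attribute masking and edge dropping.

```python
import random

# Illustrative sketch (not the paper's implementation) of centrality-guided
# node dropping for rumor propagation tree (RPT) view augmentation.

def degree_centrality(edges, num_nodes):
    """Simple degree count used as a stand-in node importance score."""
    deg = [0] * num_nodes
    for parent, child in edges:
        deg[parent] += 1
        deg[child] += 1
    return deg

def drop_probabilities(edges, num_nodes, root=0, max_p=0.3):
    """Map importance scores to node-dropping probabilities:
    higher centrality -> lower chance of being dropped; the root is exempt."""
    deg = degree_centrality(edges, num_nodes)
    max_deg = max(deg) or 1
    probs = [max_p * (1.0 - d / max_deg) for d in deg]
    probs[root] = 0.0  # principle 1: never drop the source post
    return probs

def augment_view(edges, num_nodes, root=0, seed=None):
    """Build one augmented view by sampling nodes to drop; edges incident
    to dropped nodes are removed along with them."""
    rng = random.Random(seed)
    probs = drop_probabilities(edges, num_nodes, root)
    keep = {i for i in range(num_nodes) if rng.random() >= probs[i]}
    return [(p, c) for p, c in edges if p in keep and c in keep]

# Toy RPT: node 0 is the source post, nodes 1-5 are replies.
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (4, 5)]
view_a = augment_view(edges, 6, seed=1)
view_b = augment_view(edges, 6, seed=2)
print(view_a, view_b)  # two stochastic views for the contrastive objective
```

In practice, the two views produced this way would be encoded by a graph neural network and pulled together by a contrastive loss, so that the learned rumor representation is robust to perturbations of less important nodes.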

Published

2024-03-25

How to Cite

Cui, C., & Jia, C. (2024). Propagation Tree Is Not Deep: Adaptive Graph Contrastive Learning Approach for Rumor Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 73-81. https://doi.org/10.1609/aaai.v38i1.27757

Section

AAAI Technical Track on Application Domains