Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-role Multimodal Model Approach

Authors

  • Kaipeng Niu National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, China
  • Danni Xu National University of Singapore, Singapore
  • Bingjian Yang National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, China
  • Wenxuan Liu Peking University, China
  • Zheng Wang National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, China

DOI:

https://doi.org/10.1609/aaai.v39i27.35048

Abstract

Existing video fact-checking datasets often lack detailed evidence and explanations, compromising the reliability and interpretability of fact-checking methods. To address these gaps, we developed a novel dataset featuring comprehensive annotations for each news item, including veracity labels, the rationales behind these labels, and supporting evidence. This dataset significantly enhances models' ability to accurately identify and explain video content. We also present 3MFact, an explainable automatic framework that utilizes Multi-role Multimodal Models for video Fact-checking. Our framework iteratively gathers and synthesizes online evidence to progressively determine the veracity label, generating three key outputs: the veracity label, a rationale, and supporting evidence. We aim for this work to be a pioneering effort, providing robust support for the field of video fact-checking.

Published

2025-04-11

How to Cite

Niu, K., Xu, D., Yang, B., Liu, W., & Wang, Z. (2025). Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-role Multimodal Model Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 39(27), 28276-28283. https://doi.org/10.1609/aaai.v39i27.35048