GLCF: A Global-Local Multimodal Coherence Analysis Framework for Talking Face Generation Detection
DOI: https://doi.org/10.1609/aaai.v39i1.31982

Abstract
Talking face generation (TFG) can produce lifelike talking videos of any character from only facial images and accompanying text. Abuse of this technology poses significant risks to society, creating an urgent need for research into corresponding detection methods. However, research in this field has been hindered by the lack of public datasets. In this paper, we construct the first large-scale multi-scenario talking face dataset (MSTF), which contains 22 audio and video forgery techniques, filling the dataset gap in this field. The dataset covers 11 generation scenarios and more than 20 semantic scenarios, bringing it closer to the practical application scenarios of TFG. We also propose a TFG detection framework that analyzes both the global and local coherence of the multimodal content of TFG videos. To this end, a region-focused smoothness detection module (RSFDM) and a discrepancy capture-time frame aggregation module (DCTAM) are introduced to evaluate the global temporal coherence of TFG videos and aggregate multi-grained spatial information. Additionally, a visual-audio fusion module (V-AFM) is designed to evaluate audiovisual coherence within a localized temporal perspective. Comprehensive experiments demonstrate the reasonableness and challenge of our dataset, and show the superiority of our proposed method over state-of-the-art deepfake detection approaches.
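To give a concrete sense of what "audiovisual coherence" means in this setting, the following minimal sketch scores a clip by the mean per-frame cosine similarity between aligned visual and audio embeddings. This is an illustrative stand-in only: the function name `av_coherence` and the embedding shapes are assumptions for the example, not the paper's actual V-AFM architecture.

```python
import numpy as np

def av_coherence(visual_emb: np.ndarray, audio_emb: np.ndarray) -> float:
    """Toy audiovisual-coherence score: mean per-frame cosine similarity
    between time-aligned visual and audio embeddings, each of shape [T, D].
    Illustrative only -- not the paper's V-AFM module."""
    # L2-normalize each frame's embedding (epsilon guards against zero vectors).
    v = visual_emb / (np.linalg.norm(visual_emb, axis=1, keepdims=True) + 1e-8)
    a = audio_emb / (np.linalg.norm(audio_emb, axis=1, keepdims=True) + 1e-8)
    # Per-frame cosine similarity, averaged over the clip.
    return float(np.mean(np.sum(v * a, axis=1)))

# Identical streams score near 1; unrelated streams score near 0,
# which is the kind of signal a coherence-based detector thresholds on.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 64))
y = rng.normal(size=(16, 64))
print(av_coherence(x, x) > av_coherence(x, y))  # → True
```

A real detector would of course learn these embeddings and fuse them over a local temporal window rather than averaging raw similarities, but the sketch shows the underlying idea: genuine videos should exhibit high cross-modal agreement frame by frame, while forged ones often do not.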
Published
2025-04-11
How to Cite
Chen, X., Yin, Q., Liu, J., Lu, W., Luo, X., & Zhou, J. (2025). GLCF: A Global-Local Multimodal Coherence Analysis Framework for Talking Face Generation Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 39(1), 75–83. https://doi.org/10.1609/aaai.v39i1.31982
Section
AAAI Technical Track on Application Domains