Multi-modal Deepfake Detection via Multi-task Audio-Visual Prompt Learning
DOI: https://doi.org/10.1609/aaai.v39i1.32042

Abstract
With the malicious use and dissemination of multi-modal deepfake videos, researchers have begun to investigate multi-modal deepfake detection. Unfortunately, most existing methods tune all the parameters of a deep network on limited speech video datasets and are trained under coarse-grained consistency supervision, which hinders their generalization in practical scenarios. To solve these problems, in this paper we propose the first multi-task audio-visual prompt learning method for multi-modal deepfake video detection, exploiting multiple foundation models. Specifically, we construct a two-stream multi-task learning architecture and propose sequential visual prompts and short-time audio prompts to extract multi-modal features, which are aligned at the frame level and used in subsequent fine-grained feature matching and fusion. Because visual content and the audio signal are naturally aligned in real data, we propose a frame-level cross-modal feature matching loss function to learn fine-grained audio-visual consistency. Comprehensive experiments demonstrate the effectiveness and superior generalization of our method compared with state-of-the-art methods.
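The paper itself details the frame-level cross-modal feature matching loss; as a rough illustration of the general idea, the sketch below implements a generic InfoNCE-style frame matching objective, where temporally matching visual/audio frame pairs (the diagonal of a similarity matrix) are treated as positives and all other pairs as negatives. The function name, the temperature value, and the symmetric formulation are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def frame_matching_loss(visual, audio, temperature=0.07):
    """Illustrative frame-level cross-modal matching loss (InfoNCE-style).

    visual, audio: (T, D) per-frame feature arrays, assumed temporally
    aligned for real videos. Matching frames (diagonal entries of the
    similarity matrix) act as positives; all other pairs are negatives.
    Note: this is a hypothetical sketch, not the paper's exact loss.
    """
    # L2-normalize so the dot product becomes cosine similarity
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    a = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    sim = (v @ a.T) / temperature  # (T, T) frame-to-frame similarities

    def log_softmax(x, axis):
        # Numerically stable log-softmax
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    # Cross-entropy toward the diagonal, symmetrized over both
    # directions (visual->audio and audio->visual)
    v2a = -np.diag(log_softmax(sim, axis=1)).mean()
    a2v = -np.diag(log_softmax(sim, axis=0)).mean()
    return 0.5 * (v2a + a2v)
```

For genuinely aligned streams this loss is small, while temporally shifted (mismatched) audio yields a larger value, which is the signal a fine-grained consistency objective exploits.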
Published
2025-04-11
How to Cite
Miao, H., Guo, Y., Liu, Z., & Wang, Y. (2025). Multi-modal Deepfake Detection via Multi-task Audio-Visual Prompt Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(1), 612-621. https://doi.org/10.1609/aaai.v39i1.32042
Section
AAAI Technical Track on Application Domains