Identity-Aware Vision-Language Model for Explainable Face Forgery Detection

Authors

  • Junhao Xu, College of Computer Science and Artificial Intelligence, Fudan University
  • Jingjing Chen, College of Computer Science and Artificial Intelligence, Fudan University; Institute of Trustworthy Embodied AI, Fudan University
  • Yang Jiao, College of Computer Science and Artificial Intelligence, Fudan University
  • Jiacheng Zhang, College of Computer Science and Artificial Intelligence, Fudan University
  • Zhiyu Tan, College of Computer Science and Artificial Intelligence, Fudan University; Shanghai Academy of Artificial Intelligence for Science
  • Hao Li, College of Computer Science and Artificial Intelligence, Fudan University; Shanghai Academy of Artificial Intelligence for Science
  • Yu-Gang Jiang, Institute of Trustworthy Embodied AI, Fudan University

DOI:

https://doi.org/10.1609/aaai.v40i13.38108

Abstract

Recent advances in generative artificial intelligence have enabled the creation of highly realistic image forgeries, raising significant concerns about digital media authenticity. While existing detection methods demonstrate promising results on benchmark datasets, they face critical limitations in real-world applications. First, existing detectors typically fail to detect semantic inconsistencies involving the depicted person's identity, such as implausible behaviors or incompatible environmental contexts. Second, these methods rely heavily on low-level visual cues, making them effective against known forgeries but less reliable against new or unseen manipulation techniques. To address these challenges, we present a novel personalized vision-language model (VLM) that integrates low-level visual artifact analysis with high-level semantic inconsistency detection. Unlike previous VLM-based methods, our approach avoids resource-intensive supervised fine-tuning, which often struggles to preserve distinct identity characteristics. Instead, we employ a lightweight method that dynamically encodes identity-specific information into specialized identifier tokens. This design enables the model to learn distinct identity characteristics while maintaining robust generalization capabilities. We further enhance detection through a lightweight detection adapter that extracts fine-grained information from the shallow features of the vision encoder, preserving critical low-level evidence. Comprehensive experiments demonstrate that our approach achieves 94.25% accuracy and 94.08% F1 score, outperforming both traditional forgery detectors and general VLMs while requiring only 10 extra tokens.
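The abstract's design, per-identity identifier tokens plus a shallow-feature detection adapter feeding a frozen VLM, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: all dimensions, the random stand-in features, and the single linear adapter are assumptions for clarity; only the count of 10 identifier tokens comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): token width d, sequence length n.
d, n, num_id_tokens = 64, 16, 10  # abstract reports only 10 extra tokens

# Stand-ins for features from a frozen vision encoder.
shallow_feats = rng.standard_normal((n, d))   # early-layer patch features
deep_tokens   = rng.standard_normal((n, d))   # final visual tokens fed to the LLM

# Identity-specific identifier tokens: the only per-identity learnable state,
# so personalization avoids full supervised fine-tuning.
id_tokens = rng.standard_normal((num_id_tokens, d)) * 0.02

# Lightweight detection adapter: here a single linear map over shallow
# features, preserving low-level artifact evidence deep layers tend to lose.
W_adapter = rng.standard_normal((d, d)) * 0.02
artifact_tokens = shallow_feats @ W_adapter

# Language-model input: [identifier tokens | adapter tokens | deep tokens].
llm_input = np.concatenate([id_tokens, artifact_tokens, deep_tokens], axis=0)
print(llm_input.shape)  # (42, 64), i.e. 10 + 16 + 16 tokens of width 64
```

At inference the language model would condition on this combined sequence, letting identity cues (from the identifier tokens) and low-level artifact cues (from the adapter) jointly inform the forgery verdict and its explanation.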

Published

2026-03-14

How to Cite

Xu, J., Chen, J., Jiao, Y., Zhang, J., Tan, Z., Li, H., & Jiang, Y.-G. (2026). Identity-Aware Vision-Language Model for Explainable Face Forgery Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 40(13), 11278-11286. https://doi.org/10.1609/aaai.v40i13.38108

Section

AAAI Technical Track on Computer Vision X