Disentangled Generation-Based Prototypical Alignment for Few-Shot Unsupervised Domain Adaptation in Graph-Level Anomaly Detection
DOI: https://doi.org/10.1609/aaai.v40i29.39639
Abstract
Graph-Level Anomaly Detection (GLAD) seeks to identify anomalous graphs within graph datasets and has significant applications across diverse real-world fields. Most existing GLAD methods are trained in an unsupervised manner due to the high cost of labeling, resulting in sub-optimal performance compared to supervised methods. To fill this gap, we propose a Disentangled Generation-Based Prototypical Alignment (DGPA) method that extends graph-level anomaly detection to the Few-Shot Unsupervised Domain Adaptation (FUDA) setting, aiming to identify anomalous graphs in a set of unlabeled graphs (the target domain) by using partially labeled graphs from a different but related domain (the source domain), thus fulfilling the practical requirement of transferring anomaly knowledge. This is achieved through two dedicated modules: a Disentangled Sample Generation module, which addresses label scarcity by generating faithful samples via disentangled representation learning grounded in the Information Bottleneck principle, and a Graph-based Prototypical Self-Supervision module, which alleviates domain shift by encoding and aligning semantic structures in a shared latent space across domains in a self-supervised manner. Extensive experiments on five benchmark datasets demonstrate the effectiveness of the proposed DGPA.
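To make the prototypical-alignment idea from the abstract concrete, the sketch below computes class prototypes (mean embeddings per labeled source class) and assigns unlabeled target embeddings to their nearest prototype. This is only an illustrative sketch of the general technique, not the paper's DGPA implementation; the function names, the use of cosine similarity, and the NumPy toy data are all assumptions made for illustration.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    # A prototype is the mean embedding of each labeled source class.
    classes = np.unique(labels)
    return {c: embeddings[labels == c].mean(axis=0) for c in classes}

def assign_to_prototypes(target_embeddings, prototypes):
    # Assign each unlabeled target sample to its nearest prototype by
    # cosine similarity (a common pseudo-labeling step; illustrative only).
    classes = sorted(prototypes)
    P = np.stack([prototypes[c] for c in classes])            # (C, d)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    T = target_embeddings / np.linalg.norm(
        target_embeddings, axis=1, keepdims=True)             # (n, d)
    sims = T @ P.T                                            # (n, C)
    return np.array([classes[i] for i in sims.argmax(axis=1)])

# Toy demo: two well-separated source classes; targets lie near each.
src = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array([0, 0, 1, 1])
protos = class_prototypes(src, lab)
tgt = np.array([[0.8, 0.2], [0.2, 0.8]])
pseudo = assign_to_prototypes(tgt, protos)   # -> array([0, 1])
```

In a domain-adaptation setting, such prototypes let source-side label semantics anchor the unlabeled target data in a shared latent space, which is the role the abstract's self-supervision module plays at a high level.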
Published
2026-03-14
How to Cite
Ni, Z., Zhang, C., Wan, H., & Zhao, X. (2026). Disentangled Generation-Based Prototypical Alignment for Few-Shot Unsupervised Domain Adaptation in Graph-Level Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24558-24566. https://doi.org/10.1609/aaai.v40i29.39639
Issue
Section
AAAI Technical Track on Machine Learning VI