Rethinking Propagation for Unsupervised Graph Domain Adaptation
DOI:
https://doi.org/10.1609/aaai.v38i12.29304
Keywords:
ML: Graph-based Machine Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning
Abstract
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph in order to address distribution shifts between graph domains. Previous works have primarily focused on aligning data from the source and target graphs in the representation space learned by graph neural networks (GNNs). However, the inherent generalization capability of GNNs has been largely overlooked. Motivated by our empirical analysis, we reevaluate the role of GNNs in graph domain adaptation and uncover the pivotal role of the propagation process in GNNs for adapting to different graph domains. We provide a comprehensive theoretical analysis of UGDA and derive a generalization bound for multi-layer GNNs. By formulating the Lipschitz constant of k-layer GNNs, we show that the target risk bound can be made tighter by removing propagation layers on the source graph and stacking multiple propagation layers on the target graph. Based on the empirical and theoretical analysis above, we propose a simple yet effective approach called A2GNN for graph domain adaptation. Through extensive experiments on real-world datasets, we demonstrate the effectiveness of our proposed A2GNN framework.
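The asymmetric design described in the abstract can be illustrated with a minimal NumPy sketch: a shared feature transform is applied to both domains, but the source branch skips propagation (zero propagation layers) while the target branch stacks several propagation steps. The graphs, features, the choice of GCN-style symmetric normalization, and the step count k=3 below are all illustrative assumptions, not the paper's exact architecture or training objective.

```python
import numpy as np

def normalized_adj(A):
    """GCN-style symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(X, A, k):
    """Apply k propagation steps X <- P X; k=0 returns X unchanged."""
    P = normalized_adj(A)
    for _ in range(k):
        X = P @ X
    return X

# Toy source/target graphs and features (hypothetical data).
rng = np.random.default_rng(0)
A_src = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_tgt = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X_src = rng.standard_normal((3, 4))
X_tgt = rng.standard_normal((3, 4))

# Shared linear transform applied to both domains.
W = rng.standard_normal((4, 2))

# Asymmetric propagation: none on the source, k stacked steps on the target.
Z_src = propagate(X_src, A_src, k=0) @ W
Z_tgt = propagate(X_tgt, A_tgt, k=3) @ W
```

The embeddings `Z_src` and `Z_tgt` would then feed whatever alignment or classification loss the method uses; the point of the sketch is only that the propagation depth differs per domain while the transform weights are shared.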
Published
2024-03-24
How to Cite
Liu, M., Fang, Z., Zhang, Z., Gu, M., Zhou, S., Wang, X., & Bu, J. (2024). Rethinking Propagation for Unsupervised Graph Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13963-13971. https://doi.org/10.1609/aaai.v38i12.29304
Section
AAAI Technical Track on Machine Learning III