Robust Graph Meta-Learning via Manifold Calibration with Proxy Subgraphs

Authors

  • Zhenzhong Wang The Hong Kong Polytechnic University
  • Lulu Cao Xiamen University
  • Wanyu Lin The Hong Kong Polytechnic University
  • Min Jiang Xiamen University
  • Kay Chen Tan The Hong Kong Polytechnic University

DOI:

https://doi.org/10.1609/aaai.v37i12.26776

Keywords:

General

Abstract

Graph meta-learning has become a preferred paradigm for graph-based node classification under long-tailed distributions, owing to its capability of capturing the intrinsic manifold of support and query nodes. Despite this remarkable success, graph meta-learning suffers from severe performance degradation when trained on graph data with structural noise. In this work, we observe that structural noise may impair the smoothness of the intrinsic manifold underlying the support and query nodes, leaving the meta-learner with a poorly transferable prior. To address this issue, we propose a new graph meta-learning approach that is robust against structural noise, called the Proxy subgraph-based Manifold Calibration method (Pro-MC). Concretely, a subgraph generator is designed to produce proxy subgraphs that calibrate the smoothness of the manifold. Each proxy subgraph combines two types of subgraphs with two different biases, thus preventing the manifold from becoming either too rugged or too flat. By doing so, the proposed meta-learner can obtain generalizable and transferable prior knowledge. In addition, we provide a theoretical analysis to illustrate the effectiveness of Pro-MC. Experimental results demonstrate that our approach achieves state-of-the-art performance under various kinds of structural noise.
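
For readers who want a concrete picture of the proxy-subgraph idea, the toy sketch below shows one way a proxy subgraph around a target node could be assembled by mixing two differently biased subgraph views, one structure-preserving (a k-hop neighborhood) and one exploratory (a short random walk). The choice of biases, all function names, and the convex-combination mixing rule are assumptions made for illustration only; they are not the authors' implementation, which is described in the paper itself.

# Hypothetical sketch: mixing two differently biased subgraphs into a
# soft-weighted "proxy" adjacency around a target node.
import numpy as np

def khop_nodes(adj, node, k=2):
    # Structure-preserving bias: every node within k hops of the target.
    frontier, reached = {node}, {node}
    for _ in range(k):
        nxt = {int(j) for i in frontier for j in np.nonzero(adj[i])[0]}
        frontier = nxt - reached
        reached |= nxt
    return sorted(reached)

def walk_nodes(adj, node, walk_len=10, rng=None):
    # Exploratory/smoothing bias: nodes visited by a short random walk.
    rng = rng or np.random.default_rng(0)
    visited, cur = {node}, node
    for _ in range(walk_len):
        nbrs = np.nonzero(adj[cur])[0]
        if len(nbrs) == 0:
            break
        cur = int(rng.choice(nbrs))
        visited.add(cur)
    return sorted(visited)

def proxy_adjacency(adj, node, alpha=0.5, rng=None):
    # Take the union of the two biased node sets, then form a convex
    # combination of the adjacency restricted to each view.
    a, b = khop_nodes(adj, node), walk_nodes(adj, node, rng=rng)
    nodes = sorted(set(a) | set(b))
    idx = {n: i for i, n in enumerate(nodes)}
    mix = np.zeros((len(nodes), len(nodes)))
    for view, weight in ((a, alpha), (b, 1.0 - alpha)):
        for u in view:
            for v in view:
                mix[idx[u], idx[v]] += weight * adj[u, v]
    return nodes, mix

# Tiny usage example on a 5-node path graph.
A = np.zeros((5, 5), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1
print(proxy_adjacency(A, node=2, alpha=0.5))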

Published

2023-06-26

How to Cite

Wang, Z., Cao, L., Lin, W., Jiang, M., & Tan, K. C. (2023). Robust Graph Meta-Learning via Manifold Calibration with Proxy Subgraphs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15224-15232. https://doi.org/10.1609/aaai.v37i12.26776

Issue

Vol. 37 No. 12 (2023)

Section

AAAI Special Track on Safe and Robust AI