Self-Training Based Few-Shot Node Classification by Knowledge Distillation
DOI:
https://doi.org/10.1609/aaai.v38i14.29530
Keywords:
ML: Semi-Supervised Learning, DMKM: Graph Mining, Social Network Analysis & Community, ML: Deep Learning Algorithms
Abstract
Self-training based few-shot node classification (FSNC) methods have shown excellent performance in real applications, but they cannot make full use of the information in the base set and are easily affected by the quality of pseudo-labels. To address these issues, this paper proposes a new self-training FSNC method that incorporates representation distillation and pseudo-label distillation. Specifically, the representation distillation comprises two knowledge distillation methods (i.e., local representation distillation and global representation distillation) that transfer information from the base set to the novel set. The pseudo-label distillation applies knowledge distillation to the pseudo-labels to improve their quality. Experimental results show that our method achieves superior performance compared with state-of-the-art methods. Our code and a comprehensive theoretical version are available at https://github.com/zongqianwu/KD-FSNC.
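The abstract does not spell out the distillation objective itself, so the following is only a minimal sketch of the standard temperature-scaled knowledge distillation loss that pseudo-label distillation methods typically build on; all function names and parameters here are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0, eps=1e-12):
    """KL(teacher || student) over temperature-softened class distributions,
    scaled by T^2 as is conventional in knowledge distillation.

    In a pseudo-label distillation setting, the 'teacher' distribution would
    play the role of the softened pseudo-labels being refined.
    """
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return float(T * T * kl.mean())
```

When teacher and student agree exactly the loss is zero, and it grows as their softened distributions diverge; the T^2 factor keeps gradient magnitudes comparable across temperatures.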
Published
2024-03-24
How to Cite
Wu, Z., Mo, Y., Zhou, P., Yuan, S., & Zhu, X. (2024). Self-Training Based Few-Shot Node Classification by Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15988–15995. https://doi.org/10.1609/aaai.v38i14.29530
Section
AAAI Technical Track on Machine Learning V