DropMessage: Unifying Random Dropping for Graph Neural Networks
DOI:
https://doi.org/10.1609/aaai.v37i4.25545
Keywords:
DMKM: Graph Mining, Social Network Analysis & Community Mining
Abstract
Graph Neural Networks (GNNs) are powerful tools for graph representation learning. Despite their rapid development, GNNs also face challenges such as over-fitting, over-smoothing, and non-robustness. Previous works indicate that these problems can be alleviated by random dropping methods, which integrate augmented data into models by randomly masking parts of the input. However, several open problems of random dropping on GNNs remain. First, given the divergence across datasets and models, it is challenging to find a universal method that is suitable for all cases. Second, the augmented data introduced to GNNs cause incomplete coverage of parameters and an unstable training process. Third, there is no theoretical analysis of the effectiveness of random dropping methods on GNNs. In this paper, we propose a novel random dropping method called DropMessage, which performs dropping operations directly on the propagated messages during the message-passing process. More importantly, we find that DropMessage provides a unified framework for most existing random dropping methods, based on which we give a theoretical analysis of their effectiveness. Furthermore, we elaborate on the superiority of DropMessage: it stabilizes the training process by reducing sample variance, and it preserves information diversity from the perspective of information theory, making it a theoretical upper bound of the other methods. To evaluate our proposed method, we conduct experiments on multiple tasks across five public datasets and two industrial datasets with various backbone models. The experimental results show that DropMessage offers both effectiveness and generalization, and can significantly alleviate the problems mentioned above. A detailed version with full appendix can be found on arXiv: https://arxiv.org/abs/2204.10037.
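For intuition, the sketch below illustrates the core idea in PyTorch: rather than masking node features, whole edges, or whole nodes, a random mask is applied element-wise to the propagated message matrix during message passing, with the surviving elements rescaled by 1/(1-p) to keep the expectation unchanged. This is a minimal sketch under those assumptions; the names (drop_message, SimpleGNNLayer) are illustrative and not the authors' released implementation.

```python
# Minimal sketch of message-level dropping in a mean-aggregation GNN layer.
# Illustrative only; not the paper's official code.
import torch
import torch.nn.functional as F


def drop_message(messages: torch.Tensor, p: float, training: bool) -> torch.Tensor:
    """Randomly zero individual elements of the edge-message matrix.

    messages: [num_edges, dim] tensor of propagated messages.
    The mask is applied per element of each message (not per node
    or per edge), then rescaled by 1/(1-p).
    """
    if not training or p == 0.0:
        return messages
    mask = torch.rand_like(messages) >= p
    return messages * mask / (1.0 - p)


class SimpleGNNLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int, drop_p: float = 0.5):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)
        self.drop_p = drop_p

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # edge_index: [2, num_edges] with rows (source, target)
        src, dst = edge_index
        msgs = self.lin(x)[src]                       # one message per edge
        msgs = drop_message(msgs, self.drop_p, self.training)
        out = torch.zeros(x.size(0), msgs.size(1), dtype=msgs.dtype, device=x.device)
        out.index_add_(0, dst, msgs)                  # sum aggregation
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        return F.relu(out / deg.unsqueeze(1))         # mean + nonlinearity
```

Masking at the message level rather than the feature, edge, or node level is what, per the abstract, reduces sample variance during training and preserves information diversity.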
Published
2023-06-26
How to Cite
Fang, T., Xiao, Z., Wang, C., Xu, J., Yang, X., & Yang, Y. (2023). DropMessage: Unifying Random Dropping for Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4267-4275. https://doi.org/10.1609/aaai.v37i4.25545
Issue
Vol. 37 No. 4 (2023)
Section
AAAI Technical Track on Data Mining and Knowledge Management