Signed Laplacian Graph Neural Networks

Authors

  • Yu Li College of Computer Science and Technology, Jilin University, China Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, Ministry of Education, China
  • Meng Qu Mila - Québec AI Institute, Canada Université de Montréal, Canada
  • Jian Tang Mila - Québec AI Institute, Canada HEC Montréal, Canada CIFAR AI Research Chair, Canada
  • Yi Chang School of Artificial Intelligence, Jilin University, China International Center of Future Science, Jilin University, China Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, Ministry of Education, China

DOI:

https://doi.org/10.1609/aaai.v37i4.25565

Keywords:

DMKM: Graph Mining, Social Network Analysis & Community Mining, DMKM: Rule Mining & Pattern Mining

Abstract

This paper studies learning meaningful node representations for signed graphs, where both positive and negative links exist. Prior work has approached this problem either by meticulously designing expressive signed graph neural networks or by capturing the structural information of signed graphs through traditional structure-decomposition methods, e.g., spectral graph theory. In this paper, we propose a novel signed graph representation learning framework, called Signed Laplacian Graph Neural Network (SLGNN), which combines the advantages of both. Specifically, based on spectral graph theory and graph signal processing, we first design low-pass and high-pass graph convolution filters to extract low-frequency information from positive links and high-frequency information from negative links, respectively, and then combine them into a unified message passing framework. To effectively model signed graphs, we further propose a self-gating mechanism to estimate the impacts of low-frequency and high-frequency information during message passing. We mathematically establish the relationship between the aggregation process in SLGNN and signed Laplacian regularization in signed graphs, and theoretically analyze the expressiveness of SLGNN. Experimental results demonstrate that SLGNN outperforms various competitive baselines and achieves state-of-the-art performance.
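The abstract's core recipe can be made concrete with a small sketch: a low-pass filter (neighbor averaging) over positive links, a high-pass filter (difference from neighbors) over negative links, and per-node sigmoid gates weighting the two signals. This is a minimal illustrative sketch only, not the authors' implementation; the exact filter forms, normalization, and gate parameterization (`W_low`, `W_high`) here are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slgnn_layer(X, A_pos, A_neg, W_low, W_high):
    """One hypothetical SLGNN-style aggregation step.

    X: (n, d) node features.
    A_pos, A_neg: (n, n) adjacency matrices of positive / negative links.
    W_low, W_high: (d, d) gate weights (assumed parameterization).
    """
    # Row-normalize each adjacency; clamp degrees to avoid division by zero
    # for nodes with no positive (or negative) neighbors.
    d_pos = np.maximum(A_pos.sum(axis=1, keepdims=True), 1.0)
    d_neg = np.maximum(A_neg.sum(axis=1, keepdims=True), 1.0)

    # Low-pass filter on positive links: averaging positively-linked
    # neighbors smooths features, retaining low-frequency information.
    H_low = (A_pos / d_pos) @ X

    # High-pass filter on negative links: the difference from
    # negatively-linked neighbors emphasizes high-frequency disagreement.
    H_high = X - (A_neg / d_neg) @ X

    # Self-gating: each node estimates from its own features how much
    # low- vs high-frequency signal to keep, via elementwise sigmoid gates.
    g_low = sigmoid(X @ W_low)
    g_high = sigmoid(X @ W_high)

    return g_low * H_low + g_high * H_high
```

Stacking such layers (with learnable weights and nonlinearities between them) would yield one plausible reading of the unified message-passing framework the abstract describes.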

Published

2023-06-26

How to Cite

Li, Y., Qu, M., Tang, J., & Chang, Y. (2023). Signed Laplacian Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4444-4452. https://doi.org/10.1609/aaai.v37i4.25565

Section

AAAI Technical Track on Data Mining and Knowledge Management