Learning Signed Network Embedding via Graph Attention


  • Yu Li, Jilin University
  • Yuan Tian, Jilin University
  • Jiawei Zhang, Florida State University
  • Yi Chang, Jilin University




Learning low-dimensional representations of graphs (i.e., network embedding) plays a critical role in network analysis and facilitates many downstream tasks. Recently, graph convolutional networks (GCNs) have revolutionized the field of network embedding and led to state-of-the-art performance in network analysis tasks such as link prediction and node classification. Nevertheless, most existing GCN-based network embedding methods are designed for unsigned networks. In the real world, however, some networks are signed, where links are annotated with different polarities, e.g., positive vs. negative. Negative links may have properties different from those of positive links and can significantly affect the quality of network embedding. Thus, in this paper, we propose a novel network embedding framework, SNEA, to learn Signed Network Embedding via graph Attention. In particular, we propose a masked self-attentional layer, which leverages the self-attention mechanism to estimate the importance coefficient for each pair of nodes connected by different types of links during the embedding aggregation process. SNEA then utilizes the masked self-attentional layers to aggregate more important information from neighboring nodes to generate node embeddings based on balance theory. Experimental results on several real-world signed network datasets demonstrate the effectiveness of the proposed framework on the signed link prediction task.
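The masked self-attentional aggregation sketched in the abstract can be illustrated roughly as follows. This is a simplified NumPy sketch under our own assumptions, not the authors' implementation: in GAT style, attention logits are computed from projected node features, a boolean mask restricts the softmax to neighbors of one link type (so positive and negative neighborhoods are attended over separately), and the layer returns the attention-weighted sum of neighbor embeddings. The function name `masked_attention` and all shapes are illustrative.

```python
import numpy as np

def masked_attention(H, W, a, mask):
    """One masked self-attention head (GAT-style sketch).

    H:    (n, f) input node features
    W:    (f, d) learned projection matrix
    a:    (2d,)  attention parameter vector for [z_i || z_j]
    mask: (n, n) boolean adjacency for one link type
          (True where node j is a neighbor of node i via that link type)
    Returns (n, d) aggregated node embeddings.
    """
    Z = H @ W                                  # projected features, (n, d)
    d = Z.shape[1]
    # Pairwise logits e_ij = LeakyReLU(a^T [z_i || z_j]), computed
    # as a sum of a source term and a destination term.
    src = Z @ a[:d]                            # (n,)
    dst = Z @ a[d:]                            # (n,)
    e = src[:, None] + dst[None, :]            # (n, n)
    e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU, slope 0.2
    # Masked softmax: only neighbors of this link type receive weight.
    e = np.where(mask, e, -1e9)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = np.where(mask, alpha, 0.0)
    alpha = alpha / np.maximum(alpha.sum(axis=1, keepdims=True), 1e-12)
    return alpha @ Z                           # attention-weighted aggregation

# Toy usage: a 4-node signed graph with separate positive/negative masks.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W_pos, W_neg = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
a_pos, a_neg = rng.normal(size=(4,)), rng.normal(size=(4,))
pos_mask = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 0]], dtype=bool)
neg_mask = np.array([[0, 0, 0, 1],
                     [0, 0, 0, 0],
                     [0, 0, 0, 1],
                     [1, 0, 1, 0]], dtype=bool)
# Concatenate the two aggregations into one signed embedding per node.
emb = np.concatenate([masked_attention(H, W_pos, a_pos, pos_mask),
                      masked_attention(H, W_neg, a_neg, neg_mask)], axis=1)
```

Keeping separate parameters and masks per link polarity is one plausible way to let positive and negative neighbors contribute differently, in the spirit of balance-theory-guided aggregation; the paper itself should be consulted for the exact layer definition.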




How to Cite

Li, Y., Tian, Y., Zhang, J., & Chang, Y. (2020). Learning Signed Network Embedding via Graph Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4772-4779. https://doi.org/10.1609/aaai.v34i04.5911



AAAI Technical Track: Machine Learning