DepthLGP: Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks

Authors

  • Jianxin Ma, Tsinghua University
  • Peng Cui, Tsinghua University
  • Wenwu Zhu, Tsinghua University

DOI

https://doi.org/10.1609/aaai.v32i1.11271

Keywords

Network Embedding, Gaussian Process, Dynamic Network

Abstract

Network embedding algorithms to date are primarily designed for static networks, where all nodes are known before learning. How to infer embeddings for out-of-sample nodes, i.e., nodes that arrive after learning, remains an open problem. The problem poses great challenges to existing methods: the inferred embeddings should preserve intricate network properties such as high-order proximity, share similar characteristics with in-sample node embeddings (i.e., lie in a homogeneous space), and be cheap to compute. To overcome these challenges, we propose a Deeply Transformed High-order Laplacian Gaussian Process (DepthLGP) method to infer embeddings for out-of-sample nodes. DepthLGP combines the strengths of nonparametric probabilistic modeling and deep learning. In particular, we design a high-order Laplacian Gaussian process (hLGP) to encode network properties, which permits fast and scalable inference. To further ensure homogeneity, we then employ a deep neural network to learn a nonlinear transformation from the latent states of the hLGP to node embeddings. DepthLGP is general, in that it is applicable to embeddings learned by any network embedding algorithm. We theoretically prove the expressive power of DepthLGP, and conduct extensive experiments on real-world networks. Empirical results demonstrate that our approach achieves significant performance gains over existing approaches.
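The abstract's two-stage idea, a graph-Laplacian Gaussian process for latent states followed by a nonlinear map to embeddings, can be sketched in a few lines. The sketch below is an illustration of the general pattern, not the authors' implementation: the kernel form, the hyperparameters a, b, c, and the single tanh layer standing in for the deep network are all assumptions made here for concreteness.

```python
import numpy as np

def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

# Toy graph: 5 in-sample nodes (0-4) plus one out-of-sample node (5).
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (5, 0), (5, 1)]:
    adj[i, j] = adj[j, i] = 1.0
L = laplacian(adj)

# Hypothetical high-order Laplacian kernel (an assumption, for illustration):
# K = (a*I + b*L + c*L^2)^{-1}, positive definite since a > 0 and L is PSD.
a, b, c = 1.0, 1.0, 0.5
K = np.linalg.inv(a * np.eye(6) + b * L + c * (L @ L))

# Latent states of in-sample nodes (random stand-ins for learned values).
rng = np.random.default_rng(0)
h_in = rng.normal(size=(5, 2))  # 5 in-sample nodes, 2-dim latent states

# GP conditioning: E[h_new | h_in] = K_{new,in} @ K_{in,in}^{-1} @ h_in.
# This is where out-of-sample inference happens, using only the kernel.
K_ii = K[:5, :5]
K_ni = K[5:6, :5]
h_new = K_ni @ np.linalg.solve(K_ii, h_in)

# Nonlinear transform from latent states to embeddings (a single tanh
# layer here as a stand-in for the paper's deep neural network).
W = rng.normal(size=(2, 4))
embedding = np.tanh(h_new @ W)
print(embedding.shape)  # (1, 4): one new node, 4-dim embedding
```

The point of the structure: once the kernel and the transform are trained on in-sample nodes, inferring a new node's embedding is just a GP conditioning step plus a forward pass, which is what makes the inference fast and scalable.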

Published

2018-04-25

How to Cite

Ma, J., Cui, P., & Zhu, W. (2018). DepthLGP: Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11271