Computationally Tractable Riemannian Manifolds for Graph Embeddings

Authors

  • Calin Cruceru, ETH Zurich
  • Gary Becigneul, ETH Zurich and MIT
  • Octavian-Eugen Ganea, ETH Zurich and MIT

Keywords

Representation Learning, Graph-based Machine Learning

Abstract

Representing graphs as sets of node embeddings in certain curved Riemannian manifolds has recently gained momentum in machine learning due to their desirable geometric inductive biases (e.g., hierarchical structures benefit from hyperbolic geometry). However, going beyond embedding spaces of constant sectional curvature, while potentially more representationally powerful, proves to be challenging as one can easily lose the appeal of computationally tractable tools such as geodesic distances or Riemannian gradients. Here, we explore two computationally efficient matrix manifolds, showcasing how to learn and optimize graph embeddings in these Riemannian spaces. Empirically, we demonstrate consistent improvements over Euclidean geometry while often outperforming hyperbolic and elliptical embeddings based on various metrics that capture different graph properties. Our results serve as new evidence for the benefits of non-Euclidean embeddings in machine learning pipelines.
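The abstract highlights that the appeal of an embedding manifold rests on tractable tools such as closed-form geodesic distances. As an illustrative sketch only (not the paper's exact construction), the manifold of symmetric positive-definite (SPD) matrices is a standard example of a computationally tractable matrix manifold: under the affine-invariant metric, its geodesic distance has a closed form involving only matrix square roots and logarithms.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_distance(X, Y):
    """Affine-invariant geodesic distance between SPD matrices X and Y:
    d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F
    """
    # X^{-1/2}; np.real guards against tiny imaginary round-off from sqrtm
    Xi = np.linalg.inv(np.real(sqrtm(X)))
    M = Xi @ Y @ Xi  # congruence transform keeps M symmetric positive-definite
    return np.linalg.norm(np.real(logm(M)), "fro")

# Example: distance between the identity and a scaled identity
d = spd_distance(np.eye(2), np.e * np.eye(2))  # equals sqrt(2), since logm = I
```

The function names and structure above are hypothetical conveniences; the quantity computed, however, is the standard affine-invariant SPD distance, which is symmetric and vanishes exactly when the two matrices coincide.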

Published

2021-05-18

How to Cite

Cruceru, C., Becigneul, G., & Ganea, O.-E. (2021). Computationally Tractable Riemannian Manifolds for Graph Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7133-7141. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16877

Section

AAAI Technical Track on Machine Learning I