Flaky Performances When Pretraining on Relational Databases (Student Abstract)

Authors

  • Shengchao Liu, Mila (Québec AI Institute), Université de Montréal
  • David Vazquez, ServiceNow Research
  • Jian Tang, Mila (Québec AI Institute), HEC Montréal, CIFAR AI Chair
  • Pierre-André Noël, ServiceNow Research

DOI:

https://doi.org/10.1609/aaai.v37i13.26993

Keywords:

Contrastive Learning, Pretraining, Relational Database, RDB, Graph

Abstract

We explore the downstream task performance of graph neural network (GNN) self-supervised learning (SSL) methods trained on subgraphs extracted from relational databases (RDBs). Intuitively, this joint use of SSL and GNNs should make it possible to leverage more of the available data, which could translate to better results. However, we found that naively porting contrastive SSL techniques can cause "negative transfer": linear evaluation on fixed representations from a pretrained model performs worse than linear evaluation on representations from a randomly-initialized model. Based on the conjecture that contrastive SSL conflicts with the message passing layers of the GNN, we propose InfoNode: a contrastive loss aiming to maximize the mutual information between a node's initial- and final-layer representations. Preliminary empirical results support our conjecture and the effectiveness of InfoNode.
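To make the InfoNode idea concrete, the sketch below shows one plausible way to implement an InfoNCE-style contrastive objective between each node's initial-layer and final-layer representations, as described in the abstract. This is an illustrative assumption, not the authors' released code: the function name `infonode_loss`, the tensors `h_init` and `h_final`, and the `temperature` parameter are hypothetical choices.

```python
import torch
import torch.nn.functional as F


def infonode_loss(h_init: torch.Tensor, h_final: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of an InfoNode-style contrastive loss.

    h_init:  [num_nodes, dim] node embeddings before message passing
    h_final: [num_nodes, dim] node embeddings after the last GNN layer
    """
    # Normalize both views so dot products become cosine similarities.
    z_init = F.normalize(h_init, dim=-1)
    z_final = F.normalize(h_final, dim=-1)

    # Similarity matrix: entry (i, j) compares node i's initial view
    # with node j's final view.
    logits = z_init @ z_final.t() / temperature

    # Positive pairs lie on the diagonal (same node, two layers);
    # all other nodes in the batch serve as negatives.
    targets = torch.arange(h_init.size(0), device=h_init.device)

    # Symmetrize over the two directions (initial->final and final->initial).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Under these assumptions, minimizing this loss maximizes a lower bound on the mutual information between the two per-node views, which matches the objective the abstract describes.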

Published

2024-07-15

How to Cite

Liu, S., Vazquez, D., Tang, J., & Noël, P.-A. (2024). Flaky Performances When Pretraining on Relational Databases (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16266-16267. https://doi.org/10.1609/aaai.v37i13.26993