RelNN: A Deep Neural Model for Relational Learning

Authors

  • Seyed Mehran Kazemi, University of British Columbia
  • David Poole, University of British Columbia

DOI:

https://doi.org/10.1609/aaai.v32i1.12111

Keywords:

Artificial Intelligence, Machine Learning, Relational Learning, Deep Learning, Relational Logistic Regression, Markov Logic Networks, Symbolic Neural Models, Deep Relational Learning, Statistical Relational Learning, Statistical Relational AI

Abstract

Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships, combining probability with first-order logic. With the huge advances in deep learning in recent years, combining deep networks with first-order logic has been the focus of several recent studies. Many existing attempts, however, focus only on relations and ignore object properties. The attempts that do consider object properties are limited in modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used to train these models. A modular, layer-wise architecture facilitates applying techniques developed within the deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.
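To give a rough intuition for the idea described above, the following sketch shows how a relational-logistic-regression unit applies a sigmoid to weighted counts of true groundings of formulas about an object, and how stacking such units yields a two-layer RelNN-style prediction. The data, weights, and predicate names (`likes`, `action`) are illustrative assumptions, not the paper's actual experiments, and training by back-propagation is not shown.

```python
import math

# Hypothetical toy data (illustrative only): which movies each user likes,
# and which movies have the "action" property.
likes = {
    "alice": {"m1", "m2"},
    "bob": {"m2", "m3"},
}
action = {"m2", "m3"}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rlr_unit(user, w0, w_like, w_like_action):
    """One relational-logistic-regression unit: a sigmoid over weighted
    counts of true groundings of formulas about `user`."""
    n_likes = len(likes[user])            # count of likes(user, m)
    n_action = len(likes[user] & action)  # count of likes(user, m) AND action(m)
    return sigmoid(w0 + w_like * n_likes + w_like_action * n_action)

def relnn_predict(user):
    """Two-layer sketch: hidden units compute latent properties of the
    user, and an output unit combines them (weights are arbitrary)."""
    h1 = rlr_unit(user, -1.0, 0.5, 0.0)  # latent property from overall activity
    h2 = rlr_unit(user, -1.0, 0.0, 1.0)  # latent property from action-movie likes
    return sigmoid(-0.5 + 1.2 * h1 + 0.8 * h2)

p = relnn_predict("bob")
```

In the full model the weights of every layer would be learned jointly by back-propagation; here they are fixed by hand only to make the counting-then-sigmoid structure concrete.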

Published

2018-04-26

How to Cite

Kazemi, S. M., & Poole, D. (2018). RelNN: A Deep Neural Model for Relational Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12111

Section

AAAI Technical Track: Reasoning under Uncertainty