Exploiting Determinism to Scale Relational Inference

Authors

  • Mohamed Ibrahim, École Polytechnique de Montréal
  • Christopher Pal, École Polytechnique de Montréal
  • Gilles Pesant, École Polytechnique de Montréal

DOI:

https://doi.org/10.1609/aaai.v29i1.9478

Keywords:

Statistical Relational Learning, Relational Inference, Markov Logic Networks

Abstract

One key challenge in statistical relational learning (SRL) is scalable inference. Unfortunately, most real-world problems in SRL have expressive models that translate into large grounded networks, which represent a bottleneck for any inference method and limit its scalability. In this paper we introduce Preference Relaxation (PR), a two-stage strategy that exploits the determinism present in the underlying model to improve the scalability of relational inference. The basic idea of PR is that if the underlying model involves mandatory (i.e., hard) constraints as well as preferences (i.e., soft constraints), then it is potentially wasteful to allocate memory for all constraints in advance when performing inference. To avoid this, PR starts by relaxing the preferences and performing inference with the hard constraints only. It then removes variables that violate hard constraints, thereby avoiding irrelevant computations involving preferences, and uses the removed variables to enlarge the evidence database. This reduces the effective size of the grounded network. Our approach is general and can be applied to various inference methods in relational domains. Experiments on real-world applications show that PR substantially scales up relational inference with only a minor impact on accuracy.
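
The following is a minimal Python sketch of the two-stage idea described in the abstract, under assumptions that are not in the paper: ground clauses are represented as weighted tuples of (atom, sign) literals, with weight None marking a hard constraint, and a toy unit-propagation loop stands in for the hard-constraint stage. The names `GroundClause` and `preference_relaxation` are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Literal = Tuple[str, bool]          # (ground atom, required truth value)

@dataclass(frozen=True)
class GroundClause:
    literals: Tuple[Literal, ...]
    weight: Optional[float] = None  # None marks a hard (mandatory) constraint

def preference_relaxation(clauses: List[GroundClause],
                          evidence: Dict[str, bool]):
    """Stage 1: relax the soft clauses and reason over hard constraints only,
    fixing atoms whose values are forced and adding them to the evidence.
    Stage 2: rebuild the soft part of the ground network over the atoms that
    remain undetermined, yielding a smaller network for any standard
    inference routine."""
    hard = [c for c in clauses if c.weight is None]
    soft = [c for c in clauses if c.weight is not None]
    evidence = dict(evidence)

    # Stage 1: naive unit propagation, standing in for a real
    # hard-constraint reasoner (e.g. SAT/CP propagation).
    changed = True
    while changed:
        changed = False
        for c in hard:
            if any(evidence.get(a) == v for a, v in c.literals if a in evidence):
                continue                        # clause already satisfied
            open_lits = [(a, v) for a, v in c.literals if a not in evidence]
            if len(open_lits) == 1:             # single way left to satisfy it
                atom, value = open_lits[0]
                evidence[atom] = value          # enlarge the evidence database
                changed = True

    # Stage 2: ground only the soft clauses still undecided under the
    # enlarged evidence, dropping literals that are already falsified.
    reduced = []
    for c in soft:
        if any(evidence.get(a) == v for a, v in c.literals if a in evidence):
            continue                            # satisfied: no need to ground it
        remaining = tuple((a, v) for a, v in c.literals if a not in evidence)
        if remaining:
            reduced.append(GroundClause(remaining, c.weight))
    return evidence, reduced

if __name__ == "__main__":
    clauses = [
        GroundClause((("Smokes(A)", False), ("Cancer(A)", True))),          # hard
        GroundClause((("Friends(A,B)", False), ("Smokes(B)", True)), 1.5),  # soft
    ]
    ev, net = preference_relaxation(clauses, {"Smokes(A)": True,
                                              "Friends(A,B)": True})
    print(ev)   # Cancer(A) is forced to True by the hard clause
    print(net)  # only the still-undecided part of the soft network remains
```

In this sketch the enlarged evidence database and the reduced soft network produced by stage 1 would then be handed to any standard relational inference method, which is what makes the strategy agnostic to the downstream inference algorithm.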

Published

2015-02-18

How to Cite

Ibrahim, M., Pal, C., & Pesant, G. (2015). Exploiting Determinism to Scale Relational Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9478

Section

Main Track: Machine Learning Applications