TY - JOUR
AU - Ibrahim, Mohamed
AU - Pal, Christopher
AU - Pesant, Gilles
PY - 2015/02/18
Y2 - 2024/03/28
TI - Exploiting Determinism to Scale Relational Inference
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 29
IS - 1
SE - Main Track: Machine Learning Applications
DO - 10.1609/aaai.v29i1.9478
UR - https://ojs.aaai.org/index.php/AAAI/article/view/9478
SP -
AB - One key challenge in statistical relational learning (SRL) is scalable inference. Unfortunately, most real-world problems in SRL have expressive models that translate into large grounded networks, representing a bottleneck for any inference method and weakening its scalability. In this paper we introduce Preference Relaxation (PR), a two-stage strategy that uses the determinism present in the underlying model to improve the scalability of relational inference. The basic idea of PR is that if the underlying model involves mandatory (i.e. hard) constraints as well as preferences (i.e. soft constraints) then it is potentially wasteful to allocate memory for all constraints in advance when performing inference. To avoid this, PR starts by relaxing preferences and performing inference with hard constraints only. It then removes variables that violate hard constraints, thereby avoiding irrelevant computations involving preferences. In addition it uses the removed variables to enlarge the evidence database. This reduces the effective size of the grounded network. Our approach is general and can be applied to various inference methods in relational domains. Experiments on real-world applications show how PR substantially scales relational inference with a minor impact on accuracy.
ER -