TY  - JOUR
AU  - Srinivasan, Sriram
AU  - Augustine, Eriq
AU  - Getoor, Lise
PY  - 2020/04/03
Y2  - 2024/03/28
TI  - Tandem Inference: An Out-of-Core Streaming Algorithm for Very Large-Scale Relational Inference
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 34
IS  - 06
SE  - AAAI Technical Track: Reasoning under Uncertainty
DO  - 10.1609/aaai.v34i06.6588
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/6588
SP  - 10259
EP  - 10266
AB  - Statistical relational learning (SRL) frameworks allow users to create large, complex graphical models using a compact, rule-based representation. However, these models can quickly become prohibitively large and not fit into machine memory. In this work we address this issue by introducing a novel technique called tandem inference (TI). The primary idea of TI is to combine grounding and inference such that both processes happen in tandem. TI uses an out-of-core streaming approach to overcome memory limitations. Even when memory is not an issue, we show that our proposed approach is able to do inference faster while using less memory than existing approaches. To show the effectiveness of TI, we use a popular SRL framework called Probabilistic Soft Logic (PSL). We implement TI for PSL by proposing a gradient-based inference engine and a streaming approach to grounding. We show that we are able to run an SRL model with over 1B cliques in under nine hours and using only 10 GB of RAM; previous approaches required more than 800 GB for this model and are infeasible on common hardware. To the best of our knowledge, this is the largest SRL model ever run.
ER  - 