TY - JOUR
AU - Srinivasan, Sriram
AU - Babaki, Behrouz
AU - Farnadi, Golnoosh
AU - Getoor, Lise
PY - 2019/07/17
Y2 - 2024/07/19
TI - Lifted Hinge-Loss Markov Random Fields
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 33
IS - 01
SE - AAAI Technical Track: Reasoning under Uncertainty
DO - 10.1609/aaai.v33i01.33017975
UR - https://ojs.aaai.org/index.php/AAAI/article/view/4798
SP - 7975-7983
AB - <p>Statistical relational learning models are powerful tools that combine ideas from first-order logic with probabilistic graphical models to represent complex dependencies. Despite their success in encoding large problems with a compact set of weighted rules, performing inference over these models is often challenging. In this paper, we show how to effectively combine two powerful ideas for scaling inference for large graphical models. The first idea, lifted inference, is a well-studied approach to speeding up inference in graphical models by exploiting symmetries in the underlying problem. The second idea is to frame maximum a posteriori (MAP) inference as a convex optimization problem and use the alternating direction method of multipliers (ADMM) to solve the problem in parallel. A well-studied relaxation to the combinatorial optimization problem defined for logical Markov random fields gives rise to a <em>hinge-loss Markov random field</em> (HL-MRF), for which MAP inference is a convex optimization problem. We show how the formalism introduced for coloring weighted bipartite graphs using a color refinement algorithm can be integrated with the ADMM optimization technique to take advantage of the sparse dependency structures of HL-MRFs. Our proposed approach, <em>lifted hinge-loss Markov random fields</em> (LHL-MRFs), preserves the structure of the original problem after lifting and solves lifted inference as distributed convex optimization with ADMM. In our empirical evaluation on real-world problems, we observe up to a three-fold speedup in inference over HL-MRFs.</p>
ER -