Approximate Lifting Techniques for Belief Propagation


  • Parag Singla Indian Institute of Technology Delhi
  • Aniruddh Nath University of Washington
  • Pedro Domingos University of Washington



Statistical Relational AI, Lifted Inference, Approximate Inference


Many AI applications need to explicitly represent relational structure as well as handle uncertainty. First-order probabilistic models combine the power of logic and probability to deal with such domains. A naive approach to inference in these models is to propositionalize the whole theory and carry out inference on the ground network. Lifted inference techniques (such as lifted belief propagation; Singla and Domingos 2008) provide a more scalable alternative by grouping together objects that behave identically. In many cases, however, constructing the lifted network can itself be quite costly, and the exact lifted network is often very close in size to the fully propositionalized model. To overcome these problems, we present approximate lifted inference, which groups together similar but distinguishable objects and treats them as if they were identical. Early stopping terminates lifted network construction at an early stage, resulting in a coarser network. Noise-tolerant hypercubes allow marginal errors in the representation of the lifted network itself. Both algorithms can significantly speed up lifted network construction and produce much smaller models. The coarseness of the approximation can be adjusted depending on the accuracy required, and we can bound the resulting error. Extensive evaluation on six domains demonstrates substantial efficiency gains with only minor (or no) loss in accuracy.
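The grouping idea behind lifted network construction, and the effect of early stopping, can be illustrated with a small sketch. This is not the paper's implementation; it is a generic color-passing-style refinement on a factor graph, where variables that see identical message signatures are merged into one supernode, and capping the number of refinement iterations (`max_iters`, a parameter introduced here for illustration) stops early and yields a coarser, approximate grouping.

```python
from collections import defaultdict

def _canonical(sig):
    # Deterministically relabel distinct signatures with small integers.
    ids, out = {}, {}
    for k in sorted(sig):
        out[k] = ids.setdefault(sig[k], len(ids))
    return out

def lifted_groups(var_neighbors, factor_neighbors, max_iters):
    """Group variables whose message-passing environments look identical.

    var_neighbors: dict variable -> list of adjacent factors
    factor_neighbors: dict factor -> list of adjacent variables
    Running to convergence approximates the exact lifted grouping;
    a small max_iters stops early and returns a coarser partition.
    """
    var_color = {v: 0 for v in var_neighbors}
    factor_color = {f: 0 for f in factor_neighbors}
    for _ in range(max_iters):
        # Each factor absorbs the multiset of its arguments' colors.
        fac_sig = {f: (factor_color[f], tuple(sorted(var_color[v] for v in vs)))
                   for f, vs in factor_neighbors.items()}
        factor_color = _canonical(fac_sig)
        # Each variable absorbs the multiset of its factors' colors.
        var_sig = {v: (var_color[v], tuple(sorted(factor_color[f] for f in fs)))
                   for v, fs in var_neighbors.items()}
        new_color = _canonical(var_sig)
        # The partition only refines; stop when it stops refining.
        if len(set(new_color.values())) == len(set(var_color.values())):
            break
        var_color = new_color
    groups = defaultdict(list)
    for v, c in var_color.items():
        groups[c].append(v)
    return sorted(sorted(g) for g in groups.values())
```

For example, in a chain x1 - fa - x2 - fb - x3, running to convergence separates the interior variable x2 from the symmetric endpoints x1 and x3, while `max_iters=0` leaves all three in one (coarser) supernode.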




How to Cite

Singla, P., Nath, A., & Domingos, P. (2014). Approximate Lifting Techniques for Belief Propagation. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1).



AAAI Technical Track: Reasoning under Uncertainty