The Linearization of Belief Propagation on Pairwise Markov Random Fields


  • Wolfgang Gatterbauer, Carnegie Mellon University



Belief propagation, semi-supervised learning, node classification, Markov Random Fields


Belief Propagation (BP) is a widely used approximation for exact probabilistic inference in graphical models, such as Markov Random Fields (MRFs). In graphs with cycles, however, no exact convergence guarantees for BP are known in general. For the case when all edges in the MRF carry the same symmetric, doubly stochastic potential, recent work has proposed approximating BP by linearizing the update equations around default values, which was shown to work well for the problem of node classification. The present paper generalizes all prior work and derives an approach that approximates loopy BP on any pairwise MRF by solving a linear equation system. This approach combines exact convergence guarantees and a fast matrix implementation with the ability to model heterogeneous networks. Experiments on synthetic graphs with planted edge potentials show that the linearization achieves labeling accuracy comparable to that of BP for graphs with weak potentials, while speeding up inference by orders of magnitude.
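To give a flavor of the idea, the following is a minimal sketch of a linearized BP computation in the style the abstract describes. It assumes a single centered ("residual") edge-potential matrix H shared by all edges; the toy graph, the variable names, and the specific numbers are illustrative and not taken from the paper. Final centered beliefs B then satisfy the linear system B = E + A B H (E: centered prior beliefs, A: adjacency matrix), which can be solved either by fixed-point iteration or, after vectorizing, with a direct linear solver.

```python
import numpy as np

def linearized_bp(A, E, H, iters=100):
    """Fixed-point iteration for the linear system B = E + A @ B @ H.
    Converges when the spectral radius of kron(A, H.T) is below 1
    (here it is 0.2, so small residual potentials guarantee convergence)."""
    B = E.copy()
    for _ in range(iters):
        B = E + A @ B @ H
    return B

# Toy graph: 4 nodes in a cycle, 2 classes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Centered priors: only node 0 is labeled (it leans toward class 0);
# the other nodes are unlabeled, i.e. have zero residual prior.
E = np.array([[0.1, -0.1],
              [0.0,  0.0],
              [0.0,  0.0],
              [0.0,  0.0]])

# Weak homophily potential, centered around the uninformative value.
H = np.array([[ 0.05, -0.05],
              [-0.05,  0.05]])

B = linearized_bp(A, E, H)

# Equivalent closed form: with row-major vectorization,
# vec(A B H) = kron(A, H.T) vec(B), so (I - kron(A, H.T)) vec(B) = vec(E).
n, k = A.shape[0], H.shape[0]
B_exact = np.linalg.solve(np.eye(n * k) - np.kron(A, H.T),
                          E.reshape(-1)).reshape(n, k)

labels = B.argmax(axis=1)  # homophily spreads class 0 through the cycle
```

Because the update is a contraction for weak potentials, both routes give the same answer; the iterative form needs only sparse matrix-matrix products, which is the source of the speed-up over message-passing BP.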




How to Cite

Gatterbauer, W. (2017). The Linearization of Belief Propagation on Pairwise Markov Random Fields. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).



AAAI Technical Track: Reasoning under Uncertainty