Fast Parallel and Adaptive Updates for Dual-Decomposition Solvers


  • Ozgur Sumer, University of Chicago
  • Umut Acar, Max Planck Institute for Software Systems
  • Alexander Ihler, University of California, Irvine
  • Ramgopal Mettu, University of Massachusetts Amherst


Dual-decomposition (DD) methods are quickly becoming important tools for estimating the minimum energy state of a graphical model. DD methods decompose a complex model into a collection of simpler subproblems that can be solved exactly (such as trees), and that in combination provide upper and lower bounds on the exact solution. Subproblem choice can play a major role: larger subproblems tend to improve the bound more per iteration, while smaller subproblems enable highly parallel solvers and can benefit from re-using past solutions when few changes occur between iterations. We propose an algorithm that balances these aspects to speed up convergence. Our method uses a cluster tree data structure previously proposed for adaptive exact inference, which we apply here to dual-decomposition approximate inference. This approach allows us to process large subproblems to improve the bounds at each iteration, while retaining a high degree of parallelizability and taking advantage of subproblems with sparse updates. For both synthetic inputs and a real-world stereo matching problem, we demonstrate that our algorithm achieves significant improvements in convergence time.
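To make the decomposition idea concrete, the following is a minimal sketch of plain dual decomposition with subgradient updates — not the paper's adaptive algorithm. The model (a three-variable binary cycle), its cost values, the chain/edge split, and the step-size schedule are all illustrative assumptions; the subproblems are solved by brute force where a real solver would run tree dynamic programming.

```python
import itertools

# Hypothetical toy cycle MRF on binary variables 0,1,2 (values illustrative):
# unary costs plus attractive Potts pairwise costs on every edge.
UNARY = {0: [0.0, 0.5], 1: [0.6, 0.0], 2: [0.0, 0.4]}
EDGES = [(0, 1), (1, 2), (0, 2)]            # a cycle: no single tree covers it
PAIR = lambda a, b: 0.0 if a == b else 1.0  # Potts: penalty for disagreement

def exact_map():
    """Brute-force minimum energy, for reference only."""
    return min(
        sum(UNARY[i][x[i]] for i in range(3)) +
        sum(PAIR(x[i], x[j]) for i, j in EDGES)
        for x in itertools.product((0, 1), repeat=3)
    )

def solve_chain(lam):
    """Subproblem A: chain (0,1),(1,2), half of each shared unary plus the
    multipliers. Brute force stands in for an exact tree solver."""
    best, arg = float("inf"), None
    for x0, x1, x2 in itertools.product((0, 1), repeat=3):
        e = (UNARY[0][x0] / 2 + lam[0][x0] + UNARY[1][x1]
             + UNARY[2][x2] / 2 + lam[2][x2]
             + PAIR(x0, x1) + PAIR(x1, x2))
        if e < best:
            best, arg = e, (x0, x2)
    return best, arg

def solve_edge(lam):
    """Subproblem B: the single edge (0,2); multipliers are subtracted so
    the two copies of each shared variable are tied together."""
    best, arg = float("inf"), None
    for x0, x2 in itertools.product((0, 1), repeat=2):
        e = (UNARY[0][x0] / 2 - lam[0][x0]
             + UNARY[2][x2] / 2 - lam[2][x2] + PAIR(x0, x2))
        if e < best:
            best, arg = e, (x0, x2)
    return best, arg

def dual_decomposition(iters=50):
    lam = {0: [0.0, 0.0], 2: [0.0, 0.0]}  # one multiplier per shared var/state
    best_bound = float("-inf")
    for t in range(iters):
        ea, xa = solve_chain(lam)
        eb, xb = solve_edge(lam)
        best_bound = max(best_bound, ea + eb)  # dual value: a lower bound
        # Subgradient ascent on the multipliers with a diminishing step;
        # the subgradient vanishes once the two copies agree.
        step = 0.5 / (1 + t)
        for k, i in enumerate((0, 2)):
            for s in (0, 1):
                lam[i][s] += step * ((xa[k] == s) - (xb[k] == s))
    return best_bound

bound = dual_decomposition()
print(bound, exact_map())
```

By weak duality every combined subproblem minimum is a lower bound on the true minimum energy, and on this easy attractive instance the bound becomes tight once the subproblem copies agree. The independence of the two `solve_*` calls in each iteration is what makes DD solvers parallelizable.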




How to Cite

Sumer, O., Acar, U., Ihler, A., & Mettu, R. (2011). Fast Parallel and Adaptive Updates for Dual-Decomposition Solvers. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 1076-1082.