Learning Branching Heuristics for Propositional Model Counting

Authors

  • Pashootan Vaezipoor, University of Toronto
  • Gil Lederman, UC Berkeley
  • Yuhuai Wu, University of Toronto; Vector Institute
  • Chris Maddison, University of Toronto; Vector Institute
  • Roger B. Grosse, University of Toronto; Vector Institute
  • Sanjit A. Seshia, UC Berkeley
  • Fahiem Bacchus, University of Toronto

DOI:

https://doi.org/10.1609/aaai.v35i14.17474

Keywords:

Heuristic Search, Satisfiability, Neuro-Symbolic AI (NSAI), Solvers and Tools

Abstract

Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It achieves these results on a number of problem families with very different structures. In addition to step count improvements, Neuro# can also achieve orders-of-magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
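For readers unfamiliar with the task, the brute-force sketch below (an illustration, not part of the paper or its solver) shows what "the number of satisfying assignments" means for a CNF formula given as DIMACS-style integer literals. Exact #SAT solvers compute this count far more efficiently, and the branching heuristic Neuro# learns guides which variable such a solver splits on next.

```python
from itertools import product

def count_models(clauses, num_vars):
    """Count satisfying assignments of a CNF formula by brute force.

    `clauses` is a list of clauses; each clause is a list of non-zero
    integers (DIMACS-style literals), where k means variable k is true
    and -k means variable k is false.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # The assignment satisfies the formula if every clause contains
        # at least one literal it makes true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# Example: (x1 OR x2) AND (NOT x1 OR x3) over variables x1..x3 has 4 models.
print(count_models([[1, 2], [-1, 3]], 3))  # -> 4
```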

Published

2021-05-18

How to Cite

Vaezipoor, P., Lederman, G., Wu, Y., Maddison, C., Grosse, R. B., Seshia, S. A., & Bacchus, F. (2021). Learning Branching Heuristics for Propositional Model Counting. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12427-12435. https://doi.org/10.1609/aaai.v35i14.17474

Section

AAAI Technical Track on Search and Optimization