Approximate Probabilistic Inference via Word-Level Counting

Authors

  • Supratik Chakraborty, Indian Institute of Technology, Bombay
  • Kuldeep Meel, Rice University
  • Rakesh Mistry, Indian Institute of Technology, Bombay
  • Moshe Vardi, Rice University

DOI:

https://doi.org/10.1609/aaai.v30i1.10416

Keywords:

Universal Hashing, SMT, Model Counting

Abstract

Hashing-based model counting has emerged as a promising approach to large-scale probabilistic inference on graphical models. A key component of these techniques is the use of XOR-based 2-universal hash functions that operate over Boolean domains. Many counting problems arising in probabilistic inference are, however, naturally encoded over finite discrete domains. Techniques based on bit-level (or Boolean) hash functions require these problems to be propositionalized, making it impossible to leverage the remarkable progress made in SMT (Satisfiability Modulo Theories) solvers, which can reason directly over words (or bit-vectors). In this work, we present the first approximate model counter that uses word-level hash functions and can directly leverage the power of sophisticated SMT solvers. Empirical evaluation over an extensive suite of benchmarks demonstrates the promise of the approach.
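To make the bit-level versus word-level contrast concrete, here is a minimal Python sketch (not drawn from the paper's implementation) of the two kinds of hash families involved: a random XOR parity constraint, which is 2-universal over {0,1}^n, and a word-level affine hash h(x) = (a*x + b) mod p over a small bit-vector domain. The function names, the prime modulus 257, and the toy 8-bit domain are illustrative assumptions, not the paper's actual construction.

```python
import random

# --- Bit-level hashing (what CNF-based counters use) ---
# A random XOR constraint over n Boolean variables: parity(x & mask) == b,
# where mask is a random n-bit vector and b a random bit.  This family
# is 2-universal over {0,1}^n.
def random_xor_constraint(n, rng=random):
    mask = rng.getrandbits(n)
    parity_bit = rng.getrandbits(1)
    return mask, parity_bit

def satisfies_xor(x, mask, parity_bit):
    # x is an n-bit assignment encoded as an integer
    return (bin(x & mask).count("1") % 2) == parity_bit

# --- Word-level hashing (illustrative assumption) ---
# h(x) = (a*x + b) mod p maps a k-bit word into Z_p.  With a and b drawn
# uniformly from Z_p and p prime, the family is pairwise independent, so
# it can play the same cell-partitioning role without bit-blasting.
def random_word_hash(p, rng=random):
    a = rng.randrange(p)
    b = rng.randrange(p)
    return lambda x: (a * x + b) % p

if __name__ == "__main__":
    rng = random.Random(0)
    n = 8                            # toy domain: 8-bit words
    mask, bit = random_xor_constraint(n, rng)
    h = random_word_hash(257, rng)   # 257 is prime and exceeds 2**8

    # Hashing-based counters partition the solution space into "cells"
    # and estimate the model count from the size of one random cell.
    xor_cell = [x for x in range(2 ** n) if satisfies_xor(x, mask, bit)]
    word_cell = [x for x in range(2 ** n) if h(x) % 2 == 0]
    print(len(xor_cell), len(word_cell))  # each roughly half of 256
```

The point of the word-level variant is that a constraint such as "(a*x + b) mod p lies in a given residue class" can be handed to an SMT bit-vector solver as a single word-level constraint, instead of being bit-blasted into XOR clauses, which is what lets the counter exploit SMT solvers directly.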

Published

2016-03-05

How to Cite

Chakraborty, S., Meel, K., Mistry, R., & Vardi, M. (2016). Approximate Probabilistic Inference via Word-Level Counting. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10416

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Reasoning under Uncertainty