Scaling-Up Split-Merge MCMC with Locality Sensitive Sampling (LSS)

Authors

  • Chen Luo Rice University
  • Anshumali Shrivastava Rice University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014464

Abstract

Split-Merge MCMC (Markov chain Monte Carlo) is one of the essential and popular variants of MCMC for problems in which an MCMC state consists of an unknown number of components. It is well known that state-of-the-art methods for split-merge MCMC do not scale well. Strategies for rapid mixing require smart and informative proposals to reduce the rejection rate. However, all known smart proposals involve expensive operations to suggest informative transitions. As a result, the cost of each iteration is prohibitive for massive-scale datasets. It is further known that uninformative but computationally efficient proposals, such as random split-merge, lead to extremely slow convergence. This tradeoff between mixing time and per-update cost seems hard to get around.

We leverage some unique properties of weighted MinHash, a popular locality sensitive hashing (LSH) scheme, to design a novel class of split-merge proposals that are significantly more informative than random sampling yet efficient to compute. Overall, we obtain a superior tradeoff between convergence and per-update cost. As a direct consequence, our proposals are around 6x faster than state-of-the-art sampling methods on two large real datasets, KDDCUP and PubMed, with several million entities and thousands of clusters.
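The abstract does not spell out the weighted MinHash construction itself. As a rough illustration of the primitive involved, the sketch below implements a simplified weighted MinHash using the exponential-clock trick, where each hash selects a feature with probability proportional to its weight and shared randomness makes signatures of similar weighted sets collide often. This is an illustrative stand-in, not the specific consistent weighted sampling scheme used in the paper; all function names here are hypothetical.

```python
import math
import random

def weighted_minhash(weights, num_hashes=32, seed=0):
    """Simplified weighted MinHash (illustrative, not the paper's scheme).

    For each of `num_hashes` rounds, every feature i with weight w_i draws
    an exponential "arrival time" Exp(w_i) from randomness shared across
    sets; the earliest-arriving feature becomes that round's hash value.
    Similar weighted sets therefore tend to agree on many rounds.
    `weights` maps integer feature ids to positive weights.
    """
    signature = []
    for k in range(num_hashes):
        best_feature, best_time = None, float("inf")
        for i, w in weights.items():
            if w <= 0:
                continue
            # Seed depends only on (seed, k, i), so two sets sharing
            # feature i draw the same uniform variate in round k.
            rng = random.Random(hash((seed, k, i)))
            t = -math.log(rng.random()) / w  # Exp(w) arrival time
            if t < best_time:
                best_feature, best_time = i, t
        signature.append(best_feature)
    return signature

def estimated_similarity(sig_a, sig_b):
    """Fraction of matching hash positions between two signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

With binary weights this reduces to classical MinHash, whose collision probability equals the Jaccard similarity; for general weights the collision rate grows with the overlap of the two weighted sets, which is what makes it usable as a cheap, similarity-biased proposal sampler.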

Published

2019-07-17

How to Cite

Luo, C., & Shrivastava, A. (2019). Scaling-Up Split-Merge MCMC with Locality Sensitive Sampling (LSS). Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4464-4471. https://doi.org/10.1609/aaai.v33i01.33014464

Section

AAAI Technical Track: Machine Learning