On Parameter Tying by Quantization

Authors

  • Li Chou, The University of Texas at Dallas
  • Somdeb Sarkhel, The University of Texas at Dallas
  • Nicholas Ruozzi, The University of Texas at Dallas
  • Vibhav Gogate, The University of Texas at Dallas

DOI:

https://doi.org/10.1609/aaai.v30i1.10429

Keywords:

Quantization, Learning Graphical Models, Parameter Tying, Importance Sampling

Abstract

The maximum likelihood estimator (MLE) is generally asymptotically consistent but is susceptible to overfitting. To combat this problem, regularization methods that reduce the variance at the cost of (slightly) increasing the bias are often employed in practice. In this paper, we present an alternative variance reduction (regularization) technique that quantizes the MLE estimates as a post-processing step, yielding a smoother model having several tied parameters. We provide and prove error bounds for our new technique and demonstrate experimentally that it often yields models having higher test-set log-likelihood than the ones learned using the MLE. We also propose a new importance sampling algorithm for fast approximate inference in models having several tied parameters. Our experiments show that our new inference algorithm is superior to existing approaches such as Gibbs sampling and MC-SAT on models having tied parameters, learned using our quantization-based approach.
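To make the parameter-tying idea concrete, below is a minimal sketch of quantizing learned parameters as a post-processing step. It assumes a one-dimensional k-means quantizer over the MLE estimates; the function name, the num_clusters parameter, and the choice of k-means are illustrative assumptions for this sketch, not the paper's specific algorithm or error-bound machinery.

```python
import numpy as np
from sklearn.cluster import KMeans


def tie_parameters_by_quantization(theta, num_clusters):
    """Quantize a vector of learned (e.g., MLE) parameters into a small set of
    shared values. Parameters assigned to the same cluster become tied, since
    each is replaced by its cluster center. Returns the smoothed parameter
    vector and the tying-group index of each original parameter."""
    theta = np.asarray(theta, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit(theta)
    centers = km.cluster_centers_.ravel()
    assignments = km.labels_
    tied_theta = centers[assignments]  # every parameter mapped to its cluster center
    return tied_theta, assignments


# Example: eight learned log-potentials collapse onto three shared (tied) values.
mle_params = [0.11, 0.09, 0.10, 1.95, 2.05, 2.00, -0.52, -0.48]
tied, groups = tie_parameters_by_quantization(mle_params, num_clusters=3)
print(tied)    # roughly [0.10, 0.10, 0.10, 2.00, 2.00, 2.00, -0.50, -0.50]
print(groups)  # tying group for each original parameter
```

Replacing many nearly equal parameters with a few shared values is what reduces variance (at a small cost in bias) and produces the tied structure that the paper's importance sampling algorithm exploits.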

Published

2016-03-05

How to Cite

Chou, L., Sarkhel, S., Ruozzi, N., & Gogate, V. (2016). On Parameter Tying by Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10429

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Reasoning under Uncertainty