Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables
DOI:
https://doi.org/10.1609/aaai.v35i9.17004
Keywords:
Probabilistic Graphical Models
Abstract
Probabilistic graphical models (PGMs) provide a compact representation of knowledge that can be queried in a flexible way: after learning the parameters of a graphical model once, new probabilistic queries can be answered at test time without retraining. However, when using undirected PGMs with hidden variables, two sources of error typically compound in all but the simplest models: (a) learning error (computing the partition function and integrating out the hidden variables are both intractable); and (b) prediction error (exact inference is also intractable). Here we introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it. The resulting PGM is a worse model of the data (as measured by the likelihood), but it is tuned to produce better marginals for a given inference algorithm. Unlike prior work, our approach preserves the querying flexibility of the original PGM: at test time, we can estimate the marginal of any variable given any partial evidence. We demonstrate experimentally that QT can be used to learn a challenging 8-connected grid Markov random field with hidden variables, and that it consistently outperforms the state-of-the-art AdVIL when tested on three undirected models across multiple datasets.
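As a rough illustration of the mechanism described in the abstract, the snippet below is a minimal sketch (not the authors' code) of training a PGM through its inference procedure: the potentials of a toy pairwise MRF are learned by backpropagating through a fixed number of unrolled approximate-inference steps, with randomly masked evidence so that a single set of parameters answers arbitrary conditional-marginal queries. The mean-field inference, chain topology, synthetic data, and all names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the query-training idea, NOT the authors' implementation:
# learn the log-potentials of a toy pairwise MRF *through* an unrolled
# approximate-inference loop (mean-field, chosen purely for brevity), using
# randomly masked evidence so the same parameters serve arbitrary queries.
import torch

torch.manual_seed(0)

N = 8                                      # binary variables on a chain (assumption)
T = 10                                     # unrolled inference iterations
EDGES = [(i, i + 1) for i in range(N - 1)]

# Learnable parameters: unary biases and pairwise couplings.
unary = torch.zeros(N, requires_grad=True)
pair = torch.zeros(len(EDGES), requires_grad=True)


def infer_marginals(evidence, mask):
    """Unrolled mean-field; returns estimates of P(x_i = 1) for all variables.

    evidence: (N,) observed 0/1 values (ignored where mask == 0)
    mask:     (N,) 1.0 where observed, 0.0 where hidden/queried
    """
    q = torch.full((N,), 0.5)
    for _ in range(T):
        field = unary.clone()
        for e, (i, j) in enumerate(EDGES):
            field = field.index_add(0, torch.tensor([i]), pair[e] * q[j:j + 1])
            field = field.index_add(0, torch.tensor([j]), pair[e] * q[i:i + 1])
        q = torch.sigmoid(field)
        q = mask * evidence + (1 - mask) * q   # clamp observed variables
    return q


# Toy data: mostly-constant chains with a few random bit flips (assumption).
data = (torch.rand(512, 1) > 0.5).float().expand(512, N).clone()
data = (data + (torch.rand(512, N) < 0.1).float()).remainder(2.0)

opt = torch.optim.Adam([unary, pair], lr=0.05)
for step in range(200):
    x = data[torch.randint(len(data), (1,))].squeeze(0)
    mask = (torch.rand(N) < 0.5).float()       # random query: hide ~half the variables
    q = infer_marginals(x, mask)
    hidden = 1.0 - mask
    # Cross-entropy on the queried (hidden) variables only.
    nll = -(x * torch.log(q + 1e-6) + (1 - x) * torch.log(1 - q + 1e-6))
    loss = (hidden * nll).sum() / hidden.sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned couplings:", pair.detach().numpy())
```

Because the loss is measured on the marginals produced by the unrolled inference loop rather than on the likelihood, the learned potentials compensate for the inference approximation, which is the trade-off the title refers to.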
Published
2021-05-18
How to Cite
Lázaro-Gredilla, M., Lehrach, W., Gothoskar, N., Zhou, G., Dedieu, A., & George, D. (2021). Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8252-8260. https://doi.org/10.1609/aaai.v35i9.17004
Issue
Section
AAAI Technical Track on Machine Learning II