Learning Mixtures of MLNs

Authors

  • Mohammad Islam, The University of Memphis
  • Somdeb Sarkhel, Adobe Research
  • Deepak Venugopal, The University of Memphis

Keywords

Markov Logic Networks, Probabilistic Graphical Models, Approximate Learning

Abstract

Weight learning is a challenging problem in Markov Logic Networks (MLNs) due to the large size of the ground propositional probabilistic graphical model that underlies the first-order representation of MLNs. Though more sophisticated weight learning methods that use lifted inference have been proposed, such methods typically scale up only in the absence of evidence, namely in generative weight learning. In discriminative learning, where the evidence typically destroys symmetries, existing approaches scale poorly. In this paper, we propose a novel, intuitive approach for learning MLNs discriminatively by utilizing approximate symmetries. Specifically, we reduce the size of the training database by clustering approximately symmetric atoms together and selecting a representative atom from each cluster. However, each choice made from the clusters induces a different distribution, increasing the uncertainty in our learned model. To reduce this uncertainty, we learn a finite mixture model by stacking the different distributions, where the parameters of the model are learned using an EM approach. Our results on several benchmarks show that our approach is more scalable and accurate than existing state-of-the-art MLN learning methods.
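The abstract's final step, learning the parameters of a finite mixture of stacked distributions with EM, can be illustrated in miniature. The sketch below is an assumption on our part, not code from the paper: it treats the component distributions as fixed (each induced by one choice of cluster representatives) and uses EM only to estimate the mixing weights; the function name and interface are ours.

```python
import numpy as np

def em_mixture_weights(likelihoods, n_iters=100, tol=1e-8):
    """EM for the mixing weights of a finite mixture whose component
    distributions are held fixed.

    likelihoods: (n_points, k) array; entry [i, j] is the likelihood of
    training point i under component distribution j.
    Returns the learned mixing weights (length-k array summing to 1).
    """
    n, k = likelihoods.shape
    w = np.full(k, 1.0 / k)  # start from uniform mixing weights
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component per point
        joint = likelihoods * w                      # (n, k)
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: new weights are the average responsibilities
        w_new = resp.mean(axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```

In the paper's setting, each column of `likelihoods` would come from one distribution induced by a particular choice of representative atoms; here the likelihood matrix is left abstract.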

Published

2018-04-26

How to Cite

Islam, M., Sarkhel, S., & Venugopal, D. (2018). Learning Mixtures of MLNs. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/12120

Section

AAAI Technical Track: Reasoning under Uncertainty