Latent Variable Model for Learning in Pairwise Markov Networks

Authors

  • Saeed Amizadeh, University of Pittsburgh
  • Milos Hauskrecht, University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v24i1.7691

Keywords:

Pairwise Markov Networks, L1-regularization, Variational Approximation, Structural Priors

Abstract

Pairwise Markov Networks (PMNs) are an important class of Markov networks which, due to their simplicity, are widely used in many applications such as image analysis, bioinformatics, and sensor networks. However, learning Markov networks from data is a challenging task: there are many possible structures to consider, and each structure comes with its own parameters, making it easy to overfit the model with limited data. To deal with this problem, recent learning methods build upon L1 regularization to express a bias towards sparse network structures. In this paper, we propose a new and more flexible framework for biasing the structure that can, for example, encode a preference for networks with certain local substructures which, as a whole, exhibit a particular global structure. We experiment with and show the benefit of our framework on two types of problems: learning of modular networks and learning of traffic network models.
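For context, the sketch below illustrates only the L1-regularized baseline the abstract refers to, not the authors' latent-variable framework. It is a minimal, assumed example of sparse structure recovery for a binary pairwise Markov network via neighborhood selection: each node is regressed on the others with an L1-penalized logistic regression, and nonzero coefficients are taken as edges. The function name `l1_neighborhood_selection` and the parameter choices are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): L1-regularized
# neighborhood selection for a binary pairwise Markov network.
import numpy as np
from sklearn.linear_model import LogisticRegression

def l1_neighborhood_selection(X, reg_strength=0.1):
    """X: (n_samples, n_nodes) array of {0, 1} observations.
    Returns a symmetric boolean adjacency-matrix estimate."""
    n_nodes = X.shape[1]
    adj = np.zeros((n_nodes, n_nodes), dtype=bool)
    for i in range(n_nodes):
        others = np.delete(np.arange(n_nodes), i)
        # L1 penalty encourages a sparse set of neighbors for node i
        clf = LogisticRegression(penalty="l1", solver="liblinear",
                                 C=1.0 / reg_strength)
        clf.fit(X[:, others], X[:, i])
        neighbors = others[np.abs(clf.coef_.ravel()) > 1e-6]
        adj[i, neighbors] = True
    # Symmetrize with an OR rule (an AND rule is also common)
    return adj | adj.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 10))  # random binary data
    print(l1_neighborhood_selection(X, reg_strength=0.5).astype(int))
```

The L1 penalty alone can only encode a uniform preference for sparsity; the framework proposed in the paper generalizes this bias through structural priors, for instance towards modular network structure.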

Published

2010-07-03

How to Cite

Amizadeh, S., & Hauskrecht, M. (2010). Latent Variable Model for Learning in Pairwise Markov Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 382-387. https://doi.org/10.1609/aaai.v24i1.7691