Automatic Parameter Tying: A New Approach for Regularized Parameter Learning in Markov Networks

Authors

  • Li Chou, The University of Texas at Dallas
  • Pracheta Sahoo, The University of Texas at Dallas
  • Somdeb Sarkhel, Adobe Research
  • Nicholas Ruozzi, The University of Texas at Dallas
  • Vibhav Gogate, The University of Texas at Dallas

DOI:

https://doi.org/10.1609/aaai.v32i1.11785

Keywords:

Markov networks, parameter learning, regularization

Abstract

Parameter tying is a regularization method in which the parameters (weights) of a machine learning model are partitioned into groups, typically using prior knowledge, and all parameters in a group are constrained to take the same value. In this paper, we consider parameter learning in Markov networks and propose a novel approach, automatic parameter tying (APT), that alleviates overfitting by tying parameters automatically rather than a priori, and softly rather than exactly. The key idea behind APT is to cast learning as the task of finding both the parameters and a grouping of the parameters that jointly maximize the likelihood plus a regularization term, where the regularization term penalizes parameter values that deviate from the mean of their group. We solve this optimization task with a block coordinate ascent algorithm, analyze its sample complexity, and show that it recovers the optimal parameters with high probability when the groups are well separated. Experimentally, we show that our method improves upon L2 regularization, and we suggest several pragmatic techniques for good practical performance.
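The objective sketched in the abstract, a likelihood term plus a penalty on each parameter's deviation from its group mean, lends itself to a simple alternating scheme: assign parameters to groups, recompute group means, then take an ascent step on the penalized objective. The toy sketch below illustrates that block coordinate ascent structure only; the quadratic surrogate objective, the quantile-based initialization, and the function name `automatic_parameter_tying` are all illustrative assumptions rather than the authors' implementation, which would use actual (pseudo-)likelihood gradients of a Markov network.

```python
import numpy as np

def automatic_parameter_tying(theta_mle, k, lam, iters=300, lr=0.05):
    """Toy APT sketch via block coordinate ascent.

    Alternates three blocks:
      (a) assign each parameter to the nearest of k group centers
          (1-D k-means-style assignment),
      (b) recompute each group center as the mean of its members,
      (c) gradient ascent on a surrogate penalized objective
          -||theta - theta_mle||^2 - lam * sum_i (theta_i - mu_g(i))^2,
          standing in for the (intractable) penalized log-likelihood.
    """
    theta = theta_mle.astype(float).copy()
    # Deterministic initialization: spread centers over the parameter range.
    centers = np.quantile(theta, np.linspace(0.0, 1.0, k))
    assign = np.zeros(len(theta), dtype=int)
    for _ in range(iters):
        # (a) group assignment: nearest center in absolute value
        assign = np.argmin(np.abs(theta[:, None] - centers[None, :]), axis=1)
        # (b) group means over the currently assigned parameters
        for g in range(k):
            if np.any(assign == g):
                centers[g] = theta[assign == g].mean()
        # (c) ascent step on the surrogate objective: the penalty pulls each
        # parameter toward its group mean (soft tying)
        grad = -2.0 * (theta - theta_mle) - 2.0 * lam * (theta - centers[assign])
        theta = theta + lr * grad
    return theta, centers, assign

# Illustrative run: six MLE parameters forming two well-separated groups.
theta_mle = np.array([0.0, 0.1, 0.05, 1.0, 0.95, 1.1])
theta, centers, assign = automatic_parameter_tying(theta_mle, k=2, lam=5.0)
```

In the example, the soft penalty shrinks each group's parameters toward a common value without forcing exact equality, which is the "soft instead of hard" tying the abstract describes; increasing `lam` approaches hard tying.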

Published

2018-04-29

How to Cite

Chou, L., Sahoo, P., Sarkhel, S., Ruozzi, N., & Gogate, V. (2018). Automatic Parameter Tying: A New Approach for Regularized Parameter Learning in Markov Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11785