Relational Marginal Problems: Theory and Estimation

Authors

  • Ondřej Kuželka, Cardiff University
  • Yuyi Wang, ETH Zurich
  • Jesse Davis, KU Leuven
  • Steven Schockaert, Cardiff University

DOI:

https://doi.org/10.1609/aaai.v32i1.12113

Keywords:

artificial intelligence, relational learning

Abstract

In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on a feature's number of true groundings need to be adjusted, and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.
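To make the propositional version of the marginal problem concrete, the following sketch (a hypothetical toy instance, not code from the paper) finds the maximum-entropy joint distribution over two binary variables whose single-variable marginals are fixed. With only single-variable constraints, the maximum-entropy joint is simply the product of the marginals:

```python
from itertools import product

# Assumed target marginals P(X1 = 1) and P(X2 = 1) for a toy instance
# of the propositional marginal problem.
p1, p2 = 0.7, 0.4

# Under single-variable marginal constraints alone, the maximum-entropy
# joint distribution factorizes into the product of the marginals.
joint = {
    (x1, x2): (p1 if x1 else 1 - p1) * (p2 if x2 else 1 - p2)
    for x1, x2 in product((0, 1), repeat=2)
}

# Recover the marginals from the joint to confirm the constraints hold.
m1 = sum(p for (x1, _), p in joint.items() if x1 == 1)
m2 = sum(p for (_, x2), p in joint.items() if x2 == 1)
```

The relational setting studied in the paper generalizes this idea: the given "marginals" are statistics of relational features over randomly sampled substructures, rather than probabilities of fixed propositional variables.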

Published

2018-04-26

How to Cite

Kuželka, O., Wang, Y., Davis, J., & Schockaert, S. (2018). Relational Marginal Problems: Theory and Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12113

Section

AAAI Technical Track: Reasoning under Uncertainty