Scores for Learning Discrete Causal Graphs with Unobserved Confounders

Authors

  • Alexis Bellot, Google DeepMind
  • Junzhe Zhang, Columbia University
  • Elias Bareinboim, Columbia University

DOI:

https://doi.org/10.1609/aaai.v38i10.28980

Keywords:

ML: Causal Learning

Abstract

Structural learning is arguably one of the most challenging and pervasive tasks found throughout the data sciences. There exists a growing literature that studies structural learning in non-parametric settings, where conditional independence constraints are taken to define the equivalence class. In the presence of unobserved confounders, it is understood that constraints beyond conditional independences are imposed over the observational distribution, including certain equalities and inequalities between functionals of the joint distribution. In this paper, we develop structural learning methods that leverage these additional constraints. Specifically, we first introduce a score for arbitrary graphs that combines Watanabe's asymptotic expansion of the marginal likelihood with new bounds on the cardinality of the exogenous variables. Second, we show that the new score has desirable properties in terms of expressiveness and computability. In terms of expressiveness, we prove that the score captures distinct constraints imprinted in the data, including Verma constraints and inequality constraints. In terms of computability, we show that the score satisfies score equivalence and decomposability, which allow, in principle, breaking the problem of structural learning into smaller and more manageable pieces. Third, we implement this score using an MCMC sampling algorithm and test its properties in several simulation scenarios.
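The abstract's third contribution pairs a decomposable graph score with MCMC sampling over structures. As a rough illustration of how such a search operates, the sketch below runs a Metropolis-Hastings chain over DAGs, proposing single-edge flips and accepting by the score ratio. It is only a toy: the plain BIC local score used here is a stand-in for the paper's actual score (Watanabe's expansion plus cardinality bounds), all variables are assumed binary, and all function names are hypothetical, not from the paper's implementation.

```python
import math
import random
from collections import Counter

def is_acyclic(edges, n_vars):
    """Kahn's algorithm: True iff the directed graph (set of (i, j) edges) is a DAG."""
    indeg = [0] * n_vars
    for _, j in edges:
        indeg[j] += 1
    queue = [v for v in range(n_vars) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for (i, j) in edges:
            if i == v:
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
    return seen == n_vars

def local_score(data, child, parents):
    """BIC-style local score for one discrete variable given its parents.

    Stand-in for the paper's score; decomposability means the graph score
    is a sum of such per-variable terms.
    """
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
    card = 2  # binary variables in this toy example
    n_params = (card - 1) * (card ** len(parents))
    return loglik - 0.5 * n_params * math.log(n)

def total_score(data, edges, n_vars):
    """Decomposable graph score: sum of local scores over all variables."""
    parents = {v: tuple(sorted(i for (i, j) in edges if j == v))
               for v in range(n_vars)}
    return sum(local_score(data, v, parents[v]) for v in range(n_vars))

def mcmc_structure_search(data, n_vars, n_steps=2000, seed=0):
    """Metropolis-Hastings over DAGs: flip one edge per step, accept by score ratio."""
    rng = random.Random(seed)
    edges = set()
    score = total_score(data, edges, n_vars)
    best, best_score = set(edges), score
    for _ in range(n_steps):
        i, j = rng.sample(range(n_vars), 2)
        proposal = set(edges)
        if (i, j) in proposal:
            proposal.remove((i, j))
        else:
            proposal.add((i, j))
        if not is_acyclic(proposal, n_vars):
            continue  # restrict the chain to acyclic graphs
        new_score = total_score(data, proposal, n_vars)
        # Accept with probability min(1, exp(new_score - score)).
        if math.log(rng.random() + 1e-300) < new_score - score:
            edges, score = proposal, new_score
            if score > best_score:
                best, best_score = set(edges), score
    return best, best_score
```

On data sampled from a chain X0 -> X1 -> X2, a run of this sampler typically recovers adjacencies between the dependent pairs; since the stand-in BIC score is score-equivalent, edge orientations within an equivalence class are not identified.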

Published

2024-03-24

How to Cite

Bellot, A., Zhang, J., & Bareinboim, E. (2024). Scores for Learning Discrete Causal Graphs with Unobserved Confounders. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11043-11051. https://doi.org/10.1609/aaai.v38i10.28980

Section

AAAI Technical Track on Machine Learning I