Neural Causal Abstractions

Authors

  • Kevin Xia, Columbia University
  • Elias Bareinboim, Columbia University

DOI:

https://doi.org/10.1609/aaai.v38i18.30044

Keywords:

RU: Causality, ML: Causal Learning, ML: Deep Generative Models & Autoencoders, ML: Representation Learning

Abstract

The ability of humans to understand the world in terms of cause-and-effect relationships, as well as their ability to compress information into abstract concepts, are two hallmark features of human intelligence. These two topics have been studied in tandem under the theory of causal abstractions, but how best to leverage abstraction theory in real-world causal inference tasks remains an open problem, since in most practical settings the true model is unknown and only limited data is available. In this paper, we focus on a family of causal abstractions constructed by clustering variables and their domains, redefining abstractions to be amenable to individual causal distributions. We show that such abstractions can be learned in practice using Neural Causal Models, allowing us to utilize the deep learning toolkit to solve causal tasks (identification, estimation, sampling) at different levels of abstraction granularity. Finally, we show how representation learning can be used to learn abstractions, which we apply in our experiments to scale causal inferences to high-dimensional settings, such as image data.
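To give intuition for the abstract's core construction, the following is a minimal toy sketch (not the paper's implementation) of a cluster-based abstraction map: low-level variables are grouped into clusters, and each cluster's joint domain is mapped onto a coarser abstract domain. All variable names and mapping functions here are illustrative assumptions.

```python
# Toy illustration of a variable/domain-clustering abstraction map tau.
# It sends a low-level state (an assignment to fine-grained variables)
# to an abstract state (an assignment to clustered, coarser variables).

def tau(low_state, clusters, domain_maps):
    """Map a low-level state to an abstract state.

    low_state:   {low_level_var: value}
    clusters:    {abstract_var: tuple of low-level vars it groups}
    domain_maps: {abstract_var: function from the cluster's joint value
                  to a value in the abstract variable's coarser domain}
    """
    return {
        hi: domain_maps[hi](tuple(low_state[lo] for lo in cluster))
        for hi, cluster in clusters.items()
    }

# Hypothetical example: binary variables A and B are clustered into one
# abstract variable X (their joint domain coarsened by thresholding the
# sum), while C maps through unchanged as Y.
clusters = {"X": ("A", "B"), "Y": ("C",)}
domain_maps = {
    "X": lambda v: int(sum(v) >= 1),  # coarsen the joint domain of (A, B)
    "Y": lambda v: v[0],              # identity map on C
}

low = {"A": 1, "B": 0, "C": 2}
print(tau(low, clusters, domain_maps))  # {'X': 1, 'Y': 2}
```

In the paper's setting such maps are not hand-coded but learned, with Neural Causal Models playing the role of the models at each level of granularity.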

Published

2024-03-24

How to Cite

Xia, K., & Bareinboim, E. (2024). Neural Causal Abstractions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20585-20595. https://doi.org/10.1609/aaai.v38i18.30044

Section

AAAI Technical Track on Reasoning under Uncertainty