On Causally Disentangled Representations

Authors

  • Abbavaram Gowtham Reddy, Indian Institute of Technology, Hyderabad
  • Benin Godfrey L, Indian Institute of Technology, Hyderabad
  • Vineeth N Balasubramanian, Indian Institute of Technology, Hyderabad

DOI:

https://doi.org/10.1609/aaai.v36i7.20781

Keywords:

Machine Learning (ML)

Abstract

Representation learners that disentangle factors of variation have already proven to be important in addressing various real-world concerns such as fairness and interpretability. Early approaches were unsupervised models with independence assumptions; more recently, weak supervision and correlated features have been explored, but without a causal view of the generative process. In contrast, we work under the regime of a causal generative process where generative factors are either independent or potentially confounded by a set of observed or unobserved confounders. We present an analysis of disentangled representations through the notion of a disentangled causal process. We motivate the need for new metrics and datasets to study causal disentanglement, and propose two evaluation metrics and a dataset. We show that our metrics capture the desiderata of a disentangled causal process. Finally, we perform an empirical study of state-of-the-art disentangled representation learners using our metrics and dataset to evaluate them from a causal perspective.
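As a rough illustration of the setting described in the abstract, the sketch below samples two generative factors that share a single confounder, so the factors are correlated even though neither causes the other. The factor names (size, brightness), the linear structural equations, and the toy rendering step are illustrative assumptions, not the paper's actual data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene():
    """Sample one observation from a toy confounded causal generative process.

    A confounder c (which may be observed or unobserved) influences both
    generative factors, so size and brightness are correlated without any
    causal link between them.
    """
    c = rng.normal()                                  # confounder
    size = 1.0 + 0.5 * c + 0.1 * rng.normal()         # factor 1, child of c
    brightness = 0.5 + 0.3 * c + 0.1 * rng.normal()   # factor 2, child of c
    # "Render" a 1-D observation from the factors (stand-in for an image).
    x = size * np.linspace(0.0, 1.0, 16) + brightness
    return x, (size, brightness)

x, factors = sample_scene()
```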

Published

2022-06-28

How to Cite

Reddy, A. G., Godfrey L, B., & Balasubramanian, V. N. (2022). On Causally Disentangled Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 8089-8097. https://doi.org/10.1609/aaai.v36i7.20781

Issue

Vol. 36 No. 7 (2022)

Section

AAAI Technical Track on Machine Learning II