CMVAE: Causal Meta VAE for Unsupervised Meta-Learning

Authors

  • Guodong Qi, Zhejiang University, ZJU-League Research & Development Center;
  • Huimin Yu, Zhejiang University, ZJU-League Research & Development Center; State Key Lab of CAD&CG, Zhejiang University; Zhejiang Provincial Key Laboratory of Information Processing, Communication and Networking

DOI:

https://doi.org/10.1609/aaai.v37i8.26135

Keywords:

ML: Meta Learning, ML: Unsupervised & Self-Supervised Learning, CV: Visual Reasoning & Symbolic Representations, CV: Object Detection & Categorization

Abstract

Unsupervised meta-learning aims to learn meta knowledge from unlabeled data and rapidly adapt to novel tasks. However, existing approaches may be misled by the context bias (e.g., background) in the training data. In this paper, we abstract the unsupervised meta-learning problem into a Structural Causal Model (SCM) and point out that such bias arises due to hidden confounders. To eliminate the confounders, we define the priors to be conditionally independent, learn the relationships between them, and intervene on them with causal factorization. Furthermore, we propose the Causal Meta VAE (CMVAE), which encodes the priors into latent codes in the causal space and learns their relationships simultaneously to achieve the downstream few-shot image classification task. Results on toy datasets and three benchmark datasets demonstrate that our method removes the context bias and outperforms other state-of-the-art unsupervised meta-learning algorithms as a result of this bias removal. Code is available at https://github.com/GuodongQi/CMVAE.
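To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the authors' released code; see the repository above for that) of one way to encode inputs into latent codes, learn pairwise relationships among the codes with a trainable adjacency matrix, and replace the usual N(0, I) prior with a causally factorized one, p(z) = Π_i p(z_i | parents(z_i)). All module and variable names here are illustrative assumptions.

```python
# Hypothetical illustration of a VAE with a causally factorized latent prior.
import torch
import torch.nn as nn

class CausalLatentVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, h_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu_head = nn.Linear(h_dim, z_dim)
        self.logvar_head = nn.Linear(h_dim, z_dim)
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))
        # Learnable relationships among latent codes; masked to be strictly
        # upper-triangular so the implied graph over z_1..z_d stays acyclic.
        self.adj = nn.Parameter(torch.zeros(z_dim, z_dim))
        self.register_buffer("mask", torch.triu(torch.ones(z_dim, z_dim), 1))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # Causally factorized prior mean: each z_i is predicted from its
        # parents via z @ (adj * mask); the KL term is taken against this
        # conditional prior instead of a fixed standard normal.
        prior_mu = z @ (self.adj * self.mask)
        kl = 0.5 * ((mu - prior_mu) ** 2 + logvar.exp() - logvar - 1).sum(-1)
        recon = self.decoder(z)
        return recon, kl.mean()

# Usage: one optimization step on random data, just to show the interfaces.
model = CausalLatentVAE()
x = torch.rand(16, 784)
recon, kl = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
loss.backward()
```

This sketch only illustrates the general flavor of learning relationships between latent priors; the paper's actual intervention mechanism, meta-learning episodes, and few-shot classification head are described in the full text.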

Published

2023-06-26

How to Cite

Qi, G., & Yu, H. (2023). CMVAE: Causal Meta VAE for Unsupervised Meta-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9480-9488. https://doi.org/10.1609/aaai.v37i8.26135

Section

AAAI Technical Track on Machine Learning III