Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization

Authors

  • Lin Zhu, Shanghai Jiao Tong University
  • Xinbing Wang, Shanghai Jiao Tong University
  • Chenghu Zhou, Shanghai Jiao Tong University
  • Nanyang Ye, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v37i9.26355

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning, CV: Applications, CV: Language and Vision, CV: Multi-modal Vision, CV: Representation Learning for Vision, ML: Applications, ML: Deep Neural Network Algorithms, ML: Meta Learning, ML: Multimodal Learning, ML: Other Foundations of Machine Learning, ML: Representation Learning

Abstract

Recent advances in large pre-trained models have shown promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that, even with a significant amount of training data, few methods achieve better OoD generalization than standard empirical risk minimization (ERM). This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where performance suffers both from overfitting on few-shot examples and from OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only the text representations are fine-tuned, via a Bayesian modelling approach with a gradient orthogonalization loss and an invariant risk minimization (IRM) loss. The Bayesian approach is introduced to avoid overfitting the base classes observed during training and to improve generalization to broader unseen classes. The dedicated losses are introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieves state-of-the-art OoD generalization performance under two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performance on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
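The abstract names two loss components but gives no formulas. As a rough illustration only: the IRM term below follows the standard IRMv1 penalty (squared gradient norm of the per-environment risk with respect to a fixed dummy classifier scale), and the gradient-orthogonalization term is a plausible squared-cosine-similarity penalty between two gradient vectors; neither is taken from the paper itself, so both should be read as assumptions about its general form.

```python
import torch
import torch.nn.functional as F


def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRMv1-style penalty: squared norm of the gradient of the risk with
    respect to a fixed dummy classifier scale w = 1.0 (a sketch, not the
    paper's exact implementation)."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


def grad_orth_penalty(g1: torch.Tensor, g2: torch.Tensor,
                      eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical gradient-orthogonalization term: squared cosine
    similarity between two flattened gradient vectors, pushing them
    toward orthogonality (one common way to disentangle causal and
    non-causal directions; the paper's exact form may differ)."""
    cos = torch.dot(g1.flatten(), g2.flatten()) / (
        g1.norm() * g2.norm() + eps
    )
    return cos ** 2
```

In an IRM-style training loop, `irm_penalty` would be computed per training environment and added, with a weighting coefficient, to the averaged ERM loss.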

Published

2023-06-26

How to Cite

Zhu, L., Wang, X., Zhou, C., & Ye, N. (2023). Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11461-11469. https://doi.org/10.1609/aaai.v37i9.26355

Section

AAAI Technical Track on Machine Learning IV