SAGE: Spuriousness-Aware Guided Prompt Exploration for Mitigating Multimodal Bias

Authors

  • Wenqian Ye University of Virginia
  • Di Wang University of Virginia
  • Guangtao Zheng Accenture
  • Bohan Liu University of Virginia
  • Aidong Zhang University of Virginia

DOI:

https://doi.org/10.1609/aaai.v40i33.40003

Abstract

Large vision-language models such as CLIP have shown strong zero-shot classification performance by aligning images and text in a shared embedding space. However, CLIP models often develop multimodal spurious biases: an undesirable tendency to rely on spurious features. For example, CLIP may infer object types in images from frequently co-occurring backgrounds rather than from the objects' core features. This bias significantly impairs the robustness of pre-trained CLIP models on out-of-distribution data, where such cross-modal associations no longer hold. Existing methods for mitigating multimodal spurious bias typically require fine-tuning on downstream data or prior knowledge of the bias, which undermines the out-of-the-box usability of CLIP. In this paper, we first theoretically analyze the impact of multimodal spurious bias on zero-shot classification. Based on this insight, we propose Spuriousness-Aware Guided Exploration (SAGE), a simple and effective method that mitigates spurious bias via guided prompt selection. SAGE requires no training, fine-tuning, or external annotations. It explores a space of prompt templates and selects the prompt that induces the largest semantic separation between classes, thereby improving worst-group robustness. Extensive experiments on four real-world benchmark datasets and five popular backbone models demonstrate that SAGE consistently improves zero-shot performance and generalization, outperforming previous zero-shot approaches without any external knowledge or model updates.
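The selection criterion described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual algorithm: it assumes a generic text encoder (standing in for CLIP's), scores each candidate template by the mean pairwise cosine distance between the class prompts it produces, and keeps the template with the largest separation. The function names (`semantic_separation`, `select_prompt`) and the `encode` callable are illustrative placeholders, and the guided-exploration aspect of SAGE is not modeled here.

```python
import numpy as np


def semantic_separation(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance between class text embeddings."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    upper = np.triu_indices(len(embeddings), k=1)  # each unordered pair once
    return float(np.mean(1.0 - sims[upper]))


def select_prompt(templates, class_names, encode):
    """Return the template whose filled-in class prompts are most separated.

    `encode` maps a prompt string to an embedding vector (e.g., a CLIP
    text encoder in practice; any callable works for this sketch).
    """
    def score(template):
        embs = np.stack([encode(template.format(c)) for c in class_names])
        return semantic_separation(embs)

    return max(templates, key=score)
```

With a toy encoder that maps "a photo of a cat" and "a photo of a dog" to orthogonal vectors but maps the "an image of a ..." prompts to nearly identical vectors, `select_prompt` returns the "a photo of a {}" template, since it yields the larger class separation.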

Published

2026-03-14

How to Cite

Ye, W., Wang, D., Zheng, G., Liu, B., & Zhang, A. (2026). SAGE: Spuriousness-Aware Guided Prompt Exploration for Mitigating Multimodal Bias. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 27809-27817. https://doi.org/10.1609/aaai.v40i33.40003

Section

AAAI Technical Track on Machine Learning X