Enhancing Fine-Grained Vision-Language Pretraining with Negative Augmented Samples

Authors

  • Yeyuan Wang Northwest Polytechnical University, Xi'an
  • Dehong Gao Northwest Polytechnical University, Xi'an
  • Lei Yi Alibaba Group
  • Linbo Jin Alibaba Group
  • Jinxia Zhang Southeast University
  • Libin Yang Northwest Polytechnical University, Xi'an
  • Xiaoyan Cai Northwest Polytechnical University, Xi'an

DOI:

https://doi.org/10.1609/aaai.v39i8.32869

Abstract

Existing Vision-Language Pretraining (VLP) methods have achieved remarkable improvements across a variety of vision-language tasks, confirming their effectiveness in capturing coarse-grained semantic correlations. However, their capability for fine-grained understanding, which is critical for many nuanced vision-language applications, remains limited. Prevailing VLP models often overlook the intricate distinctions in expressing different modal features and typically depend on the similarity of holistic features for cross-modal interactions. Moreover, these models directly align and integrate features from different modalities, focusing more on coarse-grained general representations, thus failing to capture the nuanced differences necessary for tasks demanding a more detailed perception. In response to these limitations, we introduce Negative Augmented Samples (NAS), a refined vision-language pretraining model that incorporates negative augmented samples to specifically address the challenge of fine-grained understanding. NAS utilizes a Visual Dictionary (VD) as a semantic bridge between the visual and linguistic domains. Additionally, it employs a Negative Visual Augmentation (NVA) method based on the VD to generate challenging negative image samples. These samples deviate from positive samples exclusively at the token level, thereby requiring the model to discern the subtle disparities between positive and negative samples with greater precision. Comprehensive experiments validate the efficacy of the NAS components and underscore its potential to enhance fine-grained vision-language comprehension.
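The NVA procedure described above can be illustrated with a minimal sketch: patch embeddings are quantized against a visual dictionary, and a small fraction of the resulting tokens are swapped for different dictionary entries, yielding a negative sample that differs from the positive one only at the token level. The function below is a hypothetical illustration of this idea, not the paper's implementation; the names (`negative_visual_augmentation`, `swap_ratio`) and the nearest-neighbor quantization step are assumptions.

```python
import torch

def negative_visual_augmentation(patch_embeds, codebook, swap_ratio=0.15):
    """Hypothetical sketch of token-level negative augmentation.

    patch_embeds: (B, N, D) image patch embeddings
    codebook:     (K, D) visual-dictionary entries
    swap_ratio:   fraction of token positions to perturb
    """
    B, N, D = patch_embeds.shape
    K = codebook.size(0)

    # Quantize each patch to its nearest visual-dictionary entry.
    dists = torch.cdist(patch_embeds.reshape(-1, D), codebook)  # (B*N, K)
    codes = dists.argmin(dim=-1).reshape(B, N)

    # Perturb a random subset of token positions per sample.
    num_swap = max(1, int(N * swap_ratio))
    neg_codes = codes.clone()
    for b in range(B):
        pos = torch.randperm(N)[:num_swap]
        rand = torch.randint(0, K, (num_swap,))
        # Shift any accidental matches so the swapped token always differs.
        rand = (rand + (rand == codes[b, pos]).long()) % K
        neg_codes[b, pos] = rand

    # Look up hard-negative patch features from the perturbed codes.
    neg_embeds = codebook[neg_codes]  # (B, N, D)
    return neg_embeds, codes, neg_codes
```

Because every non-swapped position keeps its original code, the negative differs from the positive in exactly `num_swap` token positions, which is what forces the model to attend to token-level rather than holistic differences.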

Published

2025-04-11

How to Cite

Wang, Y., Gao, D., Yi, L., Jin, L., Zhang, J., Yang, L., & Cai, X. (2025). Enhancing Fine-Grained Vision-Language Pretraining with Negative Augmented Samples. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8060–8068. https://doi.org/10.1609/aaai.v39i8.32869

Section

AAAI Technical Track on Computer Vision VII