FaNe: Towards Fine-Grained Cross-Modal Contrast with False-Negative Reduction and Text-Conditioned Sparse Attention

Authors

  • Peng Zhang, Shenzhen University
  • Zhihui Lai, Shenzhen University
  • Wenting Chen, Stanford University
  • Xu Wu, Shenzhen University
  • Heng Kong, Baoan Central Hospital of Shenzhen

DOI:

https://doi.org/10.1609/aaai.v40i15.38264

Abstract

Medical vision-language pre-training (VLP) offers significant potential for advancing medical image understanding by leveraging paired image-report data. However, existing methods are limited by False Negatives (FaNe) induced by semantically similar texts and insufficient fine-grained cross-modal alignment. To address these limitations, we propose FaNe, a semantic-enhanced VLP framework. To mitigate false negatives, we introduce a semantic-aware positive pair mining strategy based on text-text similarity with adaptive normalization. Furthermore, we design a text-conditioned sparse attention pooling module to enable fine-grained image-text alignment through localized visual representations guided by textual cues. To strengthen intra-modal discrimination, we develop a hard-negative aware contrastive loss that adaptively reweights semantically similar negatives. Extensive experiments on five downstream medical imaging benchmarks demonstrate that FaNe achieves state-of-the-art performance across image classification, object detection, and semantic segmentation, validating the effectiveness of our framework.
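
The abstract names two concrete mechanisms: a contrastive objective that treats semantically near-duplicate reports as positives rather than negatives (with remaining hard negatives reweighted), and a text-conditioned sparse attention pooling module. Since the paper's exact formulation is not reproduced on this page, the following is a minimal PyTorch sketch under an assumed CLIP-style setup; the function names and the knobs `tau_pos` (positive-mining threshold), `beta` (hard-negative exponent), and `k` (number of pooled patches) are hypothetical, and the abstract's adaptive normalization is omitted.

    # Illustrative sketch only -- not the paper's formulation.
    import torch
    import torch.nn.functional as F

    def fane_style_loss(img_emb, txt_emb, temperature=0.07,
                        tau_pos=0.9, beta=2.0):
        """Image-to-text InfoNCE with false-negative masking and
        hard-negative reweighting (hypothetical)."""
        img = F.normalize(img_emb, dim=-1)            # (B, D)
        txt = F.normalize(txt_emb, dim=-1)            # (B, D)
        logits = img @ txt.t() / temperature          # (B, B) image-text sims
        txt_sim = txt @ txt.t()                       # (B, B) text-text sims

        # Near-duplicate reports become extra positives (candidate false
        # negatives), alongside the matched diagonal pair.
        eye = torch.eye(img.size(0), dtype=torch.bool, device=img.device)
        pos = (txt_sim > tau_pos) | eye

        # Semantically similar negatives get larger weight (hard negatives).
        neg_w = (txt_sim.clamp(min=0) ** beta).masked_fill(pos, 0.0)

        exp_l = logits.exp()
        denom = (exp_l * pos).sum(1) + (exp_l * (1.0 + neg_w) * ~pos).sum(1)
        log_prob = logits - denom.log().unsqueeze(1)
        return -(log_prob * pos).sum(1).div(pos.sum(1)).mean()

Reading "text-conditioned sparse attention pooling" as "a text query attends over patch tokens, but only the most relevant patches contribute", one plausible sketch is:

    def text_conditioned_sparse_pool(patch_tokens, txt_query, k=16):
        """patch_tokens: (B, N, D); txt_query: (B, D) -> pooled (B, D).
        Pools only the k patches most relevant to the text query."""
        scores = torch.einsum('bnd,bd->bn', patch_tokens, txt_query)
        top_s, top_i = scores.topk(k, dim=1)                   # (B, k)
        w = top_s.softmax(dim=1).unsqueeze(-1)                 # (B, k, 1)
        idx = top_i.unsqueeze(-1).expand(-1, -1, patch_tokens.size(-1))
        return (w * patch_tokens.gather(1, idx)).sum(1)        # (B, D)

Restricting attention to a small top-k set is one way to obtain the "localized visual representations guided by textual cues" that the abstract credits for fine-grained image-text alignment.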

Published

2026-03-14

How to Cite

Zhang, P., Lai, Z., Chen, W., Wu, X., & Kong, H. (2026). FaNe: Towards Fine-Grained Cross-Modal Contrast with False-Negative Reduction and Text-Conditioned Sparse Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12681–12689. https://doi.org/10.1609/aaai.v40i15.38264

Issue

Vol. 40 No. 15 (2026)

Section

AAAI Technical Track on Computer Vision XII