Geometric Correspondence Constrained Pseudo-Label Alignment for Source-Free Domain Adaptive Fundus Image Segmentation

Authors

  • Zhouhongyuan Hu, Sichuan University
  • Lei Zhang, Sichuan University
  • Lituan Wang, Sichuan University
  • Zhenwei Zhang, Sichuan University
  • Minjuan Zhu, Sichuan University
  • Zhenbin Wang, Sichuan University

DOI:

https://doi.org/10.1609/aaai.v40i6.42502

Abstract

Source-free unsupervised domain adaptation (SF-UDA), which relies only on a pre-trained source model and unlabeled target data, has gained significant attention. Pseudo-labeling, valued for its simplicity and effectiveness, is a key approach in SF-UDA. However, existing methods neglect the consistency priors of anatomical features across samples, causing them to fail to revise high-confidence noise in structurally inconsistent regions; this ultimately manifests as significant discrepancies among pseudo-labeled samples, especially in limited-source-data scenarios. Motivated by this insight, we propose a novel Geometric Correspondence Constrained (GCC) pseudo-labeling framework. GCC first stratifies pseudo-labeled samples into high- and low-quality subsets. It then refines low-quality samples by leveraging the anatomical features inherent in high-quality samples while injecting Gaussian perturbation to push high-confidence noise toward the decision boundaries. This process effectively mitigates the disruptive effect of high-confidence noise and preserves critical prior anatomical knowledge, making it particularly powerful for scenarios with limited source data. Experiments on cross-domain fundus image datasets demonstrate that our method achieves state-of-the-art performance.
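The two-stage idea in the abstract (stratify pseudo-labeled samples by quality, then perturb only the low-quality subset with Gaussian noise) can be illustrated with a minimal sketch. This is not the paper's implementation; the quality score, threshold `tau`, and noise scale `sigma` are hypothetical placeholders for whatever criteria the GCC framework actually uses:

```python
import numpy as np

def stratify_and_perturb(probs, quality_scores, tau=0.8, sigma=0.05, seed=0):
    """Illustrative sketch of quality-stratified pseudo-label perturbation.

    probs          -- per-sample pseudo-label confidences in [0, 1]
    quality_scores -- hypothetical per-sample quality measure in [0, 1]
    tau            -- quality threshold separating high/low subsets (assumed)
    sigma          -- std of the injected Gaussian perturbation (assumed)
    """
    rng = np.random.default_rng(seed)
    high = quality_scores >= tau          # high-quality subset: kept as-is
    low = ~high                           # low-quality subset: to be refined
    perturbed = probs.copy()
    # Inject Gaussian noise into low-quality confidences so over-confident
    # (potentially noisy) predictions drift toward the decision boundary.
    noise = rng.normal(0.0, sigma, size=probs[low].shape)
    perturbed[low] = np.clip(probs[low] + noise, 0.0, 1.0)
    return perturbed, high, low
```

In the actual method, the high-quality subset would additionally supply anatomical structure priors for refining the low-quality samples; that correspondence step is omitted here.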

Published

2026-03-14

How to Cite

Hu, Z., Zhang, L., Wang, L., Zhang, Z., Zhu, M., & Wang, Z. (2026). Geometric Correspondence Constrained Pseudo-Label Alignment for Source-Free Domain Adaptive Fundus Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(6), 4976–4984. https://doi.org/10.1609/aaai.v40i6.42502

Section

AAAI Technical Track on Computer Vision III