Separated Contrastive Learning for Organ-at-Risk and Gross-Tumor-Volume Segmentation with Limited Annotation

Authors

  • Jiacheng Wang, Xiamen University
  • Xiaomeng Li, The Hong Kong University of Science and Technology
  • Yiming Han, Peking University
  • Jing Qin, The Hong Kong Polytechnic University
  • Liansheng Wang, Xiamen University
  • Qichao Zhou, Manteia

DOI:

https://doi.org/10.1609/aaai.v36i3.20146

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

Automatic delineation of organ-at-risk (OAR) and gross-tumor-volume (GTV) is of great significance for radiotherapy planning. However, learning powerful representations for accurate delineation is challenging when pixel (voxel)-wise annotations are limited. Pixel-level contrastive learning can alleviate the dependency on annotations by learning dense representations from unlabeled data. Recent studies in this direction design various contrastive losses on the feature maps to yield discriminative features for each pixel in the map. However, pixels in the same map inevitably share semantics and are thus pulled closer than they actually should be, which may hinder the discrimination of pixels within a map and lead to unfair comparisons with pixels in other maps. To address these issues, we propose a separated region-level contrastive learning scheme, namely SepaReg, whose core idea is to separate each image into regions and encode each region separately. Specifically, SepaReg comprises two components: a structure-aware image separation (SIS) module and an intra- and inter-organ distillation (IID) module. The SIS module operates on the image set to rebuild a region set under the guidance of structural information; inter-organ representations are then learned from this set via typical contrastive losses across regions. The IID module tackles the quantity imbalance in the region set, since tiny organs may produce fewer regions, by exploiting intra-organ representations. We conducted extensive experiments to evaluate the proposed model on a public dataset and two private datasets. The experimental results demonstrate the effectiveness of the proposed model, which consistently achieves better performance than state-of-the-art approaches. Code is available at https://github.com/jcwang123/Separate_CL.
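The abstract describes learning inter-organ representations by applying a standard contrastive loss across separately encoded regions. As a rough illustration only (not the paper's implementation; see the linked repository for that), the sketch below shows a generic InfoNCE-style contrastive loss over region embeddings, assuming each separated region yields two augmented views whose embeddings form matched rows of `anchors` and `positives` — the function name, temperature value, and pairing convention are all illustrative assumptions:

```python
import numpy as np

def region_info_nce(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE loss over region embeddings.

    anchors, positives: (N, D) arrays; row i of each is an embedding of
    two augmented views of the same region. For each anchor, the matched
    positive is the diagonal entry; all other regions act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))
```

Perfectly aligned view pairs drive the loss toward zero, while mismatched pairs are penalized, which is the behavior a cross-region contrastive objective relies on.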

Published

2022-06-28

How to Cite

Wang, J., Li, X., Han, Y., Qin, J., Wang, L., & Zhou, Q. (2022). Separated Contrastive Learning for Organ-at-Risk and Gross-Tumor-Volume Segmentation with Limited Annotation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2459-2467. https://doi.org/10.1609/aaai.v36i3.20146

Section

AAAI Technical Track on Computer Vision III