CCQ: Cross-Class Query Network for Partially Labeled Organ Segmentation
DOI: https://doi.org/10.1609/aaai.v37i2.25264
Keywords: CV: Medical and Biological Imaging
Abstract
Learning multi-organ segmentation from multiple partially labeled datasets is attracting increasing attention, as it offers a promising remedy for the scarcity of large-scale, fully labeled 3D medical image segmentation datasets. However, existing multi-organ segmentation algorithms for partially labeled datasets neglect the semantic relations and anatomical priors between different categories of organs, which are crucial for partially labeled multi-organ segmentation. In this paper, we tackle these limitations by proposing the Cross-Class Query Network (CCQ). CCQ consists of an image encoder, a cross-class query learning module, and an attentive refinement segmentation module. More specifically, the image encoder captures long-range dependencies within a single image via a transformer encoder. The cross-class query learning module first generates query vectors that represent the semantic concepts of different categories and then uses these query vectors to find the class-relevant features of the image representation for segmentation. The attentive refinement segmentation module, with an attentive skip connection, incorporates high-resolution image details and eliminates class-irrelevant noise. Extensive experimental results demonstrate that CCQ outperforms all state-of-the-art models on the MOTS dataset, which consists of seven organ and tumor segmentation tasks. Code is available at https://github.com/Yang-007/CCQ.git.
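The core idea in the abstract, class-specific query vectors attending over image features to extract class-relevant representations, can be sketched with plain scaled dot-product attention. This is a minimal NumPy illustration of the general mechanism, not the authors' implementation; the shapes, function names, and the final similarity-based logits are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_class_query(features, class_queries):
    """Sketch of cross-class query attention (illustrative, not CCQ's exact design).

    features:      (N, d) flattened per-pixel/voxel features from the image encoder
    class_queries: (C, d) one learnable query vector per organ category
    Returns per-pixel class logits of shape (N, C).
    """
    d = features.shape[-1]
    # Each class query attends over all spatial positions.
    attn = softmax(class_queries @ features.T / np.sqrt(d), axis=-1)  # (C, N)
    class_feats = attn @ features                                     # (C, d) class-relevant features
    # Score every position against each class-relevant feature.
    return features @ class_feats.T                                   # (N, C)

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32))    # 64 positions, 32-dim features
queries = rng.standard_normal((4, 32))   # 4 organ categories
logits = cross_class_query(feats, queries)
print(logits.shape)  # (64, 4)
```

In a partially labeled setting, only the logits for classes annotated in a given dataset would contribute to the loss, while the shared queries let all datasets shape the same class representations.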
Published
2023-06-26
How to Cite
Liu, X., Wen, B., & Yang, S. (2023). CCQ: Cross-Class Query Network for Partially Labeled Organ Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1755-1763. https://doi.org/10.1609/aaai.v37i2.25264
Section
AAAI Technical Track on Computer Vision II