Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation
Keywords: Biology & Cell Microscopy, Applications
Abstract
Existing image segmentation networks mainly rely on large-scale labeled datasets to attain high accuracy. However, labeling medical images is very expensive, since it requires sophisticated expert knowledge. It is therefore desirable to achieve high segmentation performance using only a few labeled examples. In this paper, we develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) segmentation that exploits only one labeled MRI image (named the atlas) and a few unlabeled images. In particular, we propose to learn the probability distributions of deformations (covering both shapes and intensities) of the unlabeled MRI images with respect to the atlas via 3D variational autoencoders (VAEs). In this manner, our method can sample from the learned distributions of image deformations to generate new, authentic brain MRI images, in numbers sufficient to train a deep segmentation network. Furthermore, we introduce a new segmentation benchmark that evaluates the generalization ability of a segmentation network in a cross-dataset setting (with data collected from different sources). Extensive experiments demonstrate that our method outperforms state-of-the-art one-shot medical segmentation methods. Our code has been released at https://github.com/dyh127/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data.
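The augmentation pipeline described in the abstract can be sketched at a high level: encode an observed atlas-to-scan deformation into a latent Gaussian, sample with the reparameterization trick, and decode a new deformation to apply to the atlas. The following is a minimal, hypothetical numpy illustration of that sampling loop, not the authors' released code; the toy volume size, latent dimensionality, linear encoder/decoder, and additive "warp" are all simplifying assumptions (the paper uses trained 3D VAEs and genuine spatial/intensity transforms).

```python
import numpy as np

rng = np.random.default_rng(0)

D = 4 * 4 * 4  # toy flattened 3D volume (real MRI volumes are far larger)
latent = 8     # latent dimensionality (assumed for illustration)

# Randomly initialized linear maps stand in for trained VAE weights.
W_enc_mu = rng.normal(size=(latent, D))
W_enc_lv = rng.normal(size=(latent, D))
W_dec = rng.normal(size=(D, latent))

def encode(deformation):
    """Map a flattened deformation field to latent mean and log-variance."""
    return W_enc_mu @ deformation, W_enc_lv @ deformation

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Map a latent sample back to a deformation field."""
    return W_dec @ z

# "Observed" deformation of one unlabeled scan with respect to the atlas.
atlas = rng.normal(size=D)
observed_deformation = rng.normal(size=D)

# Encode the observed deformation, draw a novel one, and deform the atlas.
mu, logvar = encode(observed_deformation)
new_deformation = decode(reparameterize(mu, logvar))
augmented_image = atlas + new_deformation  # toy stand-in for spatial warping
print(augmented_image.shape)  # (64,)
```

Each draw of `reparameterize` yields a different plausible deformation, so repeating the last three lines produces an arbitrarily large set of labeled training pairs from the single annotated atlas, which is the core idea behind the method.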
How to Cite
Ding, Y., Yu, X., & Yang, Y. (2021). Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1246-1254. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16212
AAAI Technical Track on Computer Vision I