Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing

Authors

  • Qihua Chen, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Xuejin Chen, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Chenxuan Wang, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Yixiong Liu, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Zhiwei Xiong, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Feng Wu, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center

DOI:

https://doi.org/10.1609/aaai.v38i2.27879

Keywords:

CV: Medical and Biological Imaging, CV: Segmentation, CV: Multi-modal Vision, CV: 3D Computer Vision

Abstract

The current neuron reconstruction pipeline for electron microscopy (EM) data usually includes automatic image segmentation followed by extensive human expert proofreading. In this work, we aim to reduce the human workload by predicting connectivity between over-segmented neuron pieces, taking both microscopy image features and 3D morphology features into account, similar to the human proofreading workflow. To this end, we first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain, which is three orders of magnitude larger than existing datasets for neuron segment connection. To learn sophisticated biological imaging features from the connectivity annotations, we propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embeddings. The learned embeddings can be easily combined with any point- or voxel-based morphological representation for automatic neuron tracing. Extensive comparisons of different combination schemes of image and morphological representations in identifying split errors across the whole fly brain demonstrate the superiority of the proposed approach, especially for locations that contain severe imaging artifacts, such as missing and misaligned sections. The dataset and code are available at https://github.com/Levishery/Flywire-Neuron-Tracing.
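To make the abstract's "connectivity-aware contrastive learning" idea concrete, below is a minimal, hypothetical sketch of how such an objective could be formed: an InfoNCE-style loss in which image embeddings of segment pairs annotated as connected act as positives, and the other pairs in the batch act as negatives. The function name, batch layout, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_connectivity_loss(emb_a, emb_b, connected, temperature=0.1):
    """InfoNCE-style loss over annotated segment pairs (illustrative sketch).

    emb_a, emb_b : (N, D) arrays of L2-normalized embeddings, where row i of
                   each array comes from the two segments of candidate pair i.
    connected    : (N,) boolean array; True where the pair is annotated as
                   belonging to the same neuron (a positive pair).
    """
    # Scaled cosine-similarity matrix between all anchors and candidates.
    sim = emb_a @ emb_b.T / temperature                     # (N, N)
    # Log-softmax over candidates for each anchor (numerically stable).
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Each connected pair i should assign high probability to its partner,
    # i.e. the diagonal entry; unconnected pairs contribute no positive term.
    pos = np.diag(log_prob)[connected]
    return -pos.mean() if pos.size else 0.0
```

In this formulation, embeddings of connected segments are pulled together while embeddings of the remaining segments in the batch are pushed apart, which is what lets the learned image features flag likely split errors.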

Published

2024-03-24

How to Cite

Chen, Q., Chen, X., Wang, C., Liu, Y., Xiong, Z., & Wu, F. (2024). Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 1174–1182. https://doi.org/10.1609/aaai.v38i2.27879

Section

AAAI Technical Track on Computer Vision I