21 Apr 2023

Whole-brain three-dimensional (3D) nucleus instance segmentation is vital for quantifying regional variations in many neuroscience studies that use tissue clearing and microscopy imaging technologies. However, due to the large size of whole-brain microscopy images ($\mu m$ resolution vs. $mm$ resolution in MRI increases the voxel count by a factor on the order of $10^9$), it is computationally challenging to train an end-to-end deep model that can recognize nucleus instances in the full 3D volume. Instead, it is common practice to first segment 2D instances in each slice and then assemble them into 3D instances. Moreover, the whole-brain segmentation is often assembled from nucleus instance segmentation results computed on pre-partitioned image stacks, each small enough for applying deep models. Complex arrangements of nuclei in close proximity make stitching 2D nucleus segmentations across slices non-trivial, leading to inconsistent segmentation along inter-slice and cross-stack gaps, which undermines the instance segmentation accuracy of current state-of-the-art deep models. To address this challenge, we present a flexible learning-based stitching component that can be either integrated into existing deep models or used as a post-processing step. Its backbone is a contextual graph model trained to predict the one-to-one correspondence between 2D segmentations across a gap. We evaluated the performance of our stitching model for 3D nucleus instance segmentation from light-sheet microscopy images of mouse brains. After integrating our stitching model into existing methods, a significant improvement in instance segmentation accuracy is achieved by alleviating the inconsistency across discontinuous slices and stacks.
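To make the stitching problem concrete, below is a minimal sketch of the conventional baseline the paper improves on: matching 2D instance labels in adjacent slices by mask overlap (IoU) and solving the one-to-one assignment with the Hungarian algorithm. The paper's contribution is to replace this hand-crafted overlap score with a learned contextual graph model; the function names, the IoU scoring, and the threshold below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative baseline only: IoU-based slice-to-slice stitching. The paper
# instead *learns* the correspondence with a contextual graph model.
import numpy as np
from scipy.optimize import linear_sum_assignment


def stitch_slices(labels_a, labels_b, iou_thresh=0.3):
    """Match 2D instance ids in `labels_b` to ids in the previous slice `labels_a`.

    labels_a, labels_b: 2D int arrays, 0 = background, k > 0 = instance id.
    Returns a dict mapping ids in labels_b -> matched ids in labels_a.
    """
    ids_a = [i for i in np.unique(labels_a) if i != 0]
    ids_b = [i for i in np.unique(labels_b) if i != 0]
    if not ids_a or not ids_b:
        return {}
    # Pairwise IoU between instance masks across the inter-slice gap.
    iou = np.zeros((len(ids_a), len(ids_b)))
    for r, ia in enumerate(ids_a):
        mask_a = labels_a == ia
        for c, ib in enumerate(ids_b):
            mask_b = labels_b == ib
            inter = np.logical_and(mask_a, mask_b).sum()
            union = np.logical_or(mask_a, mask_b).sum()
            iou[r, c] = inter / union if union else 0.0
    # One-to-one assignment maximizing total IoU (Hungarian on negated scores).
    rows, cols = linear_sum_assignment(-iou)
    return {ids_b[c]: ids_a[r]
            for r, c in zip(rows, cols) if iou[r, c] >= iou_thresh}


def assemble_volume(slice_labels):
    """Greedily propagate consistent 3D ids through a stack of 2D label maps."""
    volume = [slice_labels[0].copy()]
    next_id = int(slice_labels[0].max()) + 1
    for z in range(1, len(slice_labels)):
        matches = stitch_slices(volume[-1], slice_labels[z])
        relabeled = np.zeros_like(slice_labels[z])
        for ib in np.unique(slice_labels[z]):
            if ib == 0:
                continue
            if ib in matches:      # continue an existing 3D instance
                relabeled[slice_labels[z] == ib] = matches[ib]
            else:                  # unmatched 2D instance starts a new 3D one
                relabeled[slice_labels[z] == ib] = next_id
                next_id += 1
        volume.append(relabeled)
    return np.stack(volume)
```

This overlap heuristic is exactly what breaks down for densely packed nuclei: a 2D segment may overlap several plausible candidates in the next slice, so a fixed IoU score and threshold mis-links or splits instances along slice and stack boundaries, which is the failure mode the learned correspondence prediction is designed to resolve.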
