A Multi-Modality Fusion Network Based on Attention Mechanism for Brain Tumor Segmentation
Tongxue Zhou, Su Ruan, Yu Guo, Stéphane Canu
Brain tumor segmentation in magnetic resonance imaging (MRI) is necessary for diagnosis, monitoring and treatment, while manual segmentation is time-consuming, labor-intensive and subjective. In addition, a single modality cannot provide enough information for accurate segmentation. In this paper, we propose a multi-modality fusion network based on an attention mechanism for brain tumor segmentation. Our network consists of four channel-independent encoding paths that independently extract features from the four modalities, a feature fusion block that fuses these features, and a decoding path that produces the final tumor segmentation. The channel-independent encoding paths capture modality-specific features; however, not all features extracted from the encoders are useful for segmentation. We therefore propose to use an attention mechanism to guide the fusion block. In this way, the modality-specific features are recalibrated separately along the channel and spatial dimensions, which suppresses less informative features and emphasizes the useful ones. The resulting shared latent feature representation is finally projected by the decoder onto the brain tumor segmentation. Experimental results on the BraTS 2017 dataset demonstrate the effectiveness of the proposed method.
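To make the fusion step concrete, below is a minimal PyTorch sketch of an attention-guided fusion block in the spirit of the description above: modality-specific feature maps are concatenated, recalibrated along the channel and spatial dimensions, and projected to a shared latent representation for the decoder. The module name, layer choices and hyperparameters (e.g. the bottleneck reduction ratio) are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of an attention-based multi-modality fusion block (not the paper's code).
import torch
import torch.nn as nn


class AttentionFusionBlock(nn.Module):
    """Fuses per-modality encoder features with channel and spatial attention."""

    def __init__(self, channels: int, num_modalities: int = 4, reduction: int = 4):
        super().__init__()
        fused = channels * num_modalities
        # Channel attention: global average pooling followed by a small bottleneck MLP.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(fused, fused // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(fused // reduction, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 1x1x1 convolution producing one attention weight per voxel.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(fused, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Projection back to a shared latent representation for the decoder.
        self.project = nn.Conv3d(fused, channels, kernel_size=1)

    def forward(self, modality_features):
        # modality_features: list of tensors, each (B, C, D, H, W), one per MRI modality.
        x = torch.cat(modality_features, dim=1)   # (B, 4C, D, H, W)
        x = x * self.channel_gate(x)              # recalibrate channels
        x = x * self.spatial_gate(x)              # recalibrate spatial locations
        return self.project(x)                    # shared latent feature for the decoder


if __name__ == "__main__":
    # Example: fuse four modality-specific feature maps from one 3D encoder stage.
    feats = [torch.randn(1, 32, 16, 16, 16) for _ in range(4)]
    fused = AttentionFusionBlock(channels=32)(feats)
    print(fused.shape)  # torch.Size([1, 32, 16, 16, 16])
```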