A Triple-Stage Self-Guided Network for Kidney Tumor Segmentation
Xiaoshuai Hou, Chunmei Xie, Fengyi Li, Jiaping Wang, Chuanfeng Lv, Guotong Xie, Yang Nan
The morphological characteristics of kidney tumors are a crucial factor for radiologists in making an accurate diagnosis and planning treatment. Unfortunately, quantitative study of the relationship between kidney tumor morphology and clinical outcomes is very difficult because kidney tumors vary dramatically in size, shape, and location. Automatic semantic segmentation of the kidney and tumor is a promising tool for developing advanced surgical planning techniques. In this work, we present a triple-stage self-guided network for the kidney tumor segmentation task. The low-resolution net roughly locates the volume of interest (VOI) from down-sampled CT images, while the full-resolution net and the tumor refine net extract accurate boundaries of the kidney and tumor within the VOI from full-resolution CT images. We propose novel dilated convolution blocks (DCB) to replace the traditional pooling operations in the deeper layers of the U-Net architecture, so as to better retain detailed semantic information. In addition, a hybrid loss combining Dice and weighted cross entropy is used to guide the model to focus on voxels that lie close to the boundary and are hard to distinguish. We evaluate our method on the KiTS19 (MICCAI 2019 Kidney Tumor Segmentation Challenge) test dataset and achieve average Dice scores of 0.9674 and 0.8454 for kidney and tumor respectively, which ranked 2nd place in the KiTS19 challenge.
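The two components named in the abstract, the dilated convolution block and the hybrid Dice plus weighted cross-entropy loss, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the dilation rates (1, 2, 4), the residual connection, the use of 3D instance normalization, the class-weight vector, and the equal weighting of the two loss terms are all assumptions made for the example.

```python
# Hedged sketch (PyTorch): not the authors' released code. Module and argument
# names are hypothetical; hyperparameters are assumptions, not paper values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedConvBlock(nn.Module):
    """Stack of 3D convolutions with increasing dilation rates.

    Keeps spatial resolution (no pooling) while enlarging the receptive
    field, which is the stated motivation for the DCB in the abstract.
    """

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.InstanceNorm3d(channels),
                nn.LeakyReLU(inplace=True),
            ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        # Residual connection is an assumption for this sketch.
        return x + self.block(x)


def hybrid_dice_wce_loss(logits, target, class_weights, eps=1e-5):
    """Dice loss plus weighted cross-entropy.

    logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels.
    class_weights: per-class weights, e.g. up-weighting tumor voxels so the
    model attends to hard, boundary-adjacent regions.
    """
    ce = F.cross_entropy(logits, target, weight=class_weights)

    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    intersection = (probs * one_hot).sum(dims)
    denom = probs.sum(dims) + one_hot.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)
    dice_loss = 1.0 - dice.mean()

    # Equal weighting of the two terms is an assumption.
    return dice_loss + ce
```

In such a setup, the weighted cross-entropy term penalizes misclassified voxels of under-represented classes more heavily, while the Dice term directly optimizes region overlap; how the paper balances the two terms is not specified in the abstract.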