Prototype Knowledge Distillation for Medical Segmentation with Missing Modality
Shuai Wang (Tsinghua University); Zipei Yan (The Hong Kong Polytechnic University); Daoan Zhang (Southern University of Science and Technology); Haining Wei (Tsinghua University); Zhongsen Li (Tsinghua University); Rui Li (Tsinghua University)
SPS
Multi-modality medical imaging is crucial in clinical treatment as it provides complementary information for medical image segmentation. However, collecting multi-modal data in clinical practice is difficult due to limited scan time and other clinical constraints. It is therefore clinically meaningful to develop a segmentation paradigm that handles this missing-modality problem. In this paper, we propose a prototype knowledge distillation (ProtoKD) method to tackle this challenging problem, especially in the toughest scenario where only single-modal data is available. Specifically, ProtoKD not only distills the pixel-wise knowledge of multi-modality data to single-modality data but also transfers intra-class and inter-class feature variations, so that the student model learns a more robust feature representation from the teacher model and can run inference with only a single modality. Our method achieves state-of-the-art performance on the BraTS benchmark.
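The core idea described above — transferring intra-class and inter-class feature variations via class prototypes — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes pixel embeddings are flattened to shape (N, D), that prototypes are computed as class-wise means of teacher features under the ground-truth mask, and that the distillation signal is a KL divergence between teacher and student pixel-to-prototype similarity distributions. All function names and the temperature parameter `tau` are hypothetical.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean embedding per class: features (N, D), labels (N,) -> (C, D)."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def proto_similarity(features, protos, tau=1.0):
    """Cosine similarity of each pixel embedding to every prototype,
    converted into a distribution over classes via softmax."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    return _softmax(f @ p.T / tau, axis=1)

def proto_kd_loss(student_feats, teacher_feats, labels, num_classes, tau=1.0):
    """KL(teacher || student) between pixel-to-prototype similarity maps.

    Prototypes come from the (multi-modal) teacher features, so matching
    the similarity distributions transfers both intra-class compactness
    and inter-class separation to the single-modality student.
    """
    protos = class_prototypes(teacher_feats, labels, num_classes)
    q_t = proto_similarity(teacher_feats, protos, tau)
    q_s = proto_similarity(student_feats, protos, tau)
    kl = np.sum(q_t * (np.log(q_t + 1e-8) - np.log(q_s + 1e-8)), axis=1)
    return float(np.mean(kl))
```

In a full pipeline, this prototype term would be added to a pixel-wise distillation loss (e.g. KL on the two models' output logits) and the usual segmentation loss on the student's single-modality predictions.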