
PIXEL-LEVEL AND AFFINITY-LEVEL KNOWLEDGE DISTILLATION FOR UNSUPERVISED SEGMENTATION OF COVID-19 LESIONS

Rui Xu, Yufeng Wang, Xinchen Ye, Pengcheng Wu, Yen-Wei Chen, Fangyi Xu, Wenchao Zhu, Chao Chen, Yong Zhou, Hongjie Hu, Xiaofeng Qu, Shoji Kido, Noriyuki Tomiyama

Length: 00:11:32
12 May 2022

Automatic segmentation of COVID-19 lesions is essential for computer-aided diagnosis. However, the task remains challenging because widely used supervised methods require large-scale annotated data that is difficult to obtain. Although an unsupervised method based on anomaly detection has shown promising results in [1], its performance is still relatively poor. We address this problem by proposing a pixel-level and affinity-level knowledge distillation method. It first obtains a pre-trained teacher network with rich semantic knowledge of CT images by constructing and training an auto-encoder, then trains a student network with the same architecture as the teacher by distilling the teacher's knowledge only on normal CT images, and finally localizes COVID-19 lesions using the feature discrepancy between the teacher and student networks. In addition to the traditional pixel-level distillation, we design an affinity-level distillation that takes the pairwise relationships of features into account in order to distill effective knowledge more fully. We evaluate the method on three different COVID-19 datasets, and the experimental results show that it substantially improves segmentation performance compared with existing unsupervised anomaly detection methods.
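
To make the two distillation objectives and the test-time localization concrete, the sketch below gives a minimal PyTorch-style illustration. The layer choices, the use of cosine similarity, and the loss weighting are assumptions made for illustration, not the authors' exact implementation; f_t and f_s stand for corresponding teacher and student feature maps.

    # Hedged sketch: pixel-level and affinity-level distillation losses,
    # plus a teacher-student discrepancy map for lesion localization.
    # Details (layers, similarity measure, weights) are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def pixel_level_loss(f_t, f_s):
        # Per-pixel feature matching between teacher f_t and student f_s,
        # both of shape (B, C, H, W); 1 - cosine similarity over channels.
        return (1.0 - F.cosine_similarity(f_t, f_s, dim=1)).mean()

    def affinity_level_loss(f_t, f_s):
        # Match pairwise (pixel-to-pixel) feature affinities of teacher and student.
        b, c, h, w = f_t.shape
        t = F.normalize(f_t.flatten(2), dim=1)      # (B, C, H*W)
        s = F.normalize(f_s.flatten(2), dim=1)
        aff_t = torch.bmm(t.transpose(1, 2), t)     # (B, H*W, H*W) affinity matrix
        aff_s = torch.bmm(s.transpose(1, 2), s)
        return F.mse_loss(aff_s, aff_t)

    def anomaly_map(f_t, f_s, out_size):
        # At test time, the per-pixel teacher-student discrepancy is upsampled
        # to the CT slice size and used to localize COVID-19 lesions.
        d = 1.0 - F.cosine_similarity(f_t, f_s, dim=1)          # (B, H, W)
        return F.interpolate(d.unsqueeze(1), size=out_size,
                             mode='bilinear', align_corners=False)

During training on normal CT images only, the student is optimized with a weighted sum of the two losses; anomalous (lesion) regions then yield large discrepancies because the student has never learned to reproduce the teacher's features there.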
