CONTRASTIVE TRANSLATION LEARNING FOR MEDICAL IMAGE SEGMENTATION
Wankang Zeng, Wenkang Fan, Dongfang Shen, Yinran Chen, Xiongbiao Luo
Unsupervised domain adaptation commonly uses cycle generative networks to produce synthetic data translated from the source to the target domain. Unfortunately, translated samples often fail to preserve the semantic content of the source inputs, resulting in poor adaptability of the network when segmenting target data. This work proposes an improved domain translation mechanism that strengthens the perceptual ability of the network for accurate segmentation of unlabeled target data. Our domain translation employs patchwise contrastive learning to enforce semantic content consistency between input and translated images. We applied the approach to unsupervised domain adaptation for abdominal organ segmentation. The experimental results demonstrate the effectiveness of our framework, which outperforms other methods.
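
The sketch below illustrates one common way a patchwise contrastive (InfoNCE-style) objective can be formulated in PyTorch: each translated patch is matched to the source patch at the same spatial location as its positive, with all other source patches serving as negatives. This is only a minimal illustration under assumed inputs; the function name patch_nce_loss, the tensors feat_src and feat_trans, and the temperature value are hypothetical and not taken from the authors' implementation.

import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src: torch.Tensor,
                   feat_trans: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """feat_src, feat_trans: (N, C) features of N spatially corresponding patches.

    Each translated patch is pulled toward the source patch at the same
    location (positive) and pushed away from all other source patches
    (negatives), encouraging content consistency across the translation.
    (Hypothetical sketch, not the paper's code.)
    """
    # Normalize so that dot products become cosine similarities.
    feat_src = F.normalize(feat_src, dim=1)
    feat_trans = F.normalize(feat_trans, dim=1)

    # (N, N) similarity matrix between translated and source patches.
    logits = feat_trans @ feat_src.t() / temperature

    # The positive for translated patch i is source patch i.
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: 256 patches with 128-dimensional features.
if __name__ == "__main__":
    f_in = torch.randn(256, 128)
    f_out = torch.randn(256, 128)
    print(patch_nce_loss(f_in, f_out).item())

In practice such patch features are usually taken from several encoder layers and projected through a small MLP head before the loss is computed, so the loss above would be summed over layers and spatial samples.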