Feature Fusion For Segmentation And Classification Of Skin Lesions
Zhang Yue, Zifan Chen, Hao Yu, Xinyu Yao, Hongfeng Li
Automated segmentation and classification of dermoscopy images are two crucial challenges for the early detection of skin cancers. Deep models trained for each task individually ignore the relationship between the two tasks and, in practical use, lack diagnostic proposals or explanations for their results. We assume that features extracted by a segmentation model and a classification model trained on similar datasets are highly related and have the potential to boost each other when trained jointly. In this paper, we propose a Combined-Learning Network (CLNet) consisting of a classification network, a segmentation network, and a feature fusion module for the segmentation and classification of skin lesions. In particular, the feature fusion module fuses the features extracted by the classification branch and the segmentation branch and outputs fused features for each of the two tasks. In this way, the information shared by the two branches can be fully exploited and the performance on both tasks can be mutually improved. Experimental results demonstrate that the proposed model achieves promising performance on a public skin disease dataset.
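The core idea, fusing intermediate features from the two branches and returning a task-specific fused view to each branch, can be illustrated with a minimal sketch. The concatenation followed by per-task 1x1 convolutions used here is an assumption for illustration only, not the fusion operation defined in the paper; the module name `FeatureFusion` and the channel sizes are hypothetical.

```python
# Minimal sketch of a feature fusion module (assumption: both branches expose
# intermediate feature maps of shapes (B, C_cls, H, W) and (B, C_seg, H, W)
# at the same spatial resolution). Not the authors' exact design.
import torch
import torch.nn as nn


class FeatureFusion(nn.Module):
    def __init__(self, c_cls: int, c_seg: int, c_out: int):
        super().__init__()
        c_in = c_cls + c_seg
        # One projection per task, so each branch receives its own fused view.
        self.to_cls = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
        self.to_seg = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_cls: torch.Tensor, f_seg: torch.Tensor):
        # Share information across branches by concatenating along channels,
        # then project once for the classification task and once for segmentation.
        fused = torch.cat([f_cls, f_seg], dim=1)
        return self.to_cls(fused), self.to_seg(fused)


# Usage example with hypothetical channel counts: fuse 256-channel
# classification features with 128-channel segmentation features.
fusion = FeatureFusion(c_cls=256, c_seg=128, c_out=256)
f_cls = torch.randn(2, 256, 32, 32)
f_seg = torch.randn(2, 128, 32, 32)
out_cls, out_seg = fusion(f_cls, f_seg)  # each: (2, 256, 32, 32)
```

The returned features would then be fed back into the classification and segmentation heads respectively, which is how the shared information can improve both tasks during joint training.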