A Multi-Task Self-Supervised Learning Framework for Scopy Images
Yuexiang Li, Jiawei Chen, Yefeng Zheng
SPS
The training of deep learning models requires large amounts of data. However, since annotations of medical data are difficult to acquire, the quantity of annotated medical images is often insufficient to train deep learning networks well. In this paper, we propose a novel multi-task self-supervised learning framework, namely ColorMe, for scopy images, which deeply exploits the rich information embedded in raw data and loosens the demand for annotated training data. The approach pre-trains neural networks on multiple proxy tasks, i.e., green-to-red/blue colorization and color distribution estimation, which are defined in terms of the prior knowledge of scopy images. Compared to the train-from-scratch strategy, fine-tuning from these pre-trained networks leads to better accuracy on various tasks: cervix type classification and skin lesion segmentation.
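To make the two proxy tasks concrete, the sketch below shows one plausible way to derive their training inputs and targets from an unlabeled RGB scopy image: the green channel serves as the network input, the red and blue channels as colorization targets, and a normalized per-channel histogram as the color-distribution target. This is a minimal illustration under assumed shapes and binning, not the paper's exact formulation; the function name `make_colorization_pair` and the 16-bin histogram are hypothetical choices.

```python
import numpy as np

def make_colorization_pair(rgb, n_bins=16):
    """Build self-supervised proxy-task data from an unlabeled RGB image.

    Input to the network: the green channel only.
    Targets: the red and blue channels (green-to-red/blue colorization)
    and a normalized histogram of those channels (color distribution
    estimation). Shapes and binning are illustrative assumptions.
    """
    rgb = np.asarray(rgb, dtype=np.float32) / 255.0  # H x W x 3 in [0, 1]
    g = rgb[..., 1]                                   # proxy-task input
    rb = rgb[..., [0, 2]]                             # colorization targets
    # Color-distribution target: normalized histogram of R and B values.
    hist = np.stack([
        np.histogram(rb[..., c], bins=n_bins, range=(0.0, 1.0))[0]
        for c in range(2)
    ]).astype(np.float32)
    hist /= hist.sum(axis=1, keepdims=True)
    return g, rb, hist

# Toy example on a random "image"; no annotations are needed.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
g, rb, hist = make_colorization_pair(img)
print(g.shape, rb.shape, hist.shape)  # (32, 32) (32, 32, 2) (2, 16)
```

Because both targets are computed directly from the raw pixels, a network can be pre-trained on such pairs without any manual labels, and then fine-tuned on the downstream classification or segmentation task.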