21 Apr 2023

Multi-modal medical image analysis with deep neural network (DNN) models has become an area of growing interest. While some works have proposed exploiting the significant “mismatch” between multi-modal medical images to diagnose stroke onset time within 4.5 hours, few are devoted to diagnosis on datasets where the “mismatch” is insignificant. We aim to advance this problem and overcome some of its challenges. Specifically, we propose Multi-modal Contrastive Representation Learning, namely MCRLe, which leverages momentum contrastive representation learning to learn the “mismatch” between different modalities of the same subject. To achieve the best performance, MCRLe eliminates the bias introduced between modalities during the imaging process using a cross-modal registration technique, and enriches the image data with a carefully designed data augmentation procedure. We carried out extensive experiments to evaluate MCRLe on a dataset of 136 stroke patients and validated it on three backbone networks: 3D CNN, 3D ResNet-18, and 3D ResNet-50. Experimental results show that MCRLe improves DNN performance on the stroke onset time diagnosis task and helps the DNN focus more on stroke regions exhibiting “mismatch”, even without using lesion segmentation results as auxiliary information. Results of cross-validation and various backbone network settings further confirm the superiority of MCRLe.
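
The abstract does not include implementation details, so the following is only a minimal sketch, in PyTorch, of the kind of MoCo-style cross-modal contrastive objective it alludes to: the query embedding comes from one modality and the positive key from the other modality of the same subject, with negatives drawn from a memory queue. The function names, the temperature, the momentum coefficient, and the queue mechanics are illustrative assumptions, not MCRLe's actual implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce_loss(q, k, queue, temperature=0.07):
    """InfoNCE loss for cross-modal momentum contrastive learning.

    q:     (N, D) query embeddings from modality A (query encoder)
    k:     (N, D) positive key embeddings from modality B of the same
           subjects (momentum encoder)
    queue: (D, K) memory bank of negative key embeddings
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    # Positive logits: similarity of each query with its paired key.
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)          # (N, 1)
    # Negative logits: similarity of each query with all queued keys.
    l_neg = torch.einsum("nd,dk->nk", q, queue.clone().detach())  # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive key is always at index 0.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """Exponential-moving-average update of the key (momentum) encoder."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```

In such a setup the two encoders would typically share a 3D backbone (e.g., 3D ResNet-18, as evaluated in the paper), and the cross-modal pairing replaces the usual augmented-view pairing of standard MoCo; whether MCRLe follows exactly this formulation is not stated in the abstract.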