SPS Members: Free
IEEE Members: $11.00
Non-members: $15.00
Length: 12:30
03 Apr 2020

Learning from multimodal brain imaging data has attracted considerable attention in medical image analysis due to the proliferation of multimodal data collection. It is widely accepted that multimodal data provide complementary information beyond what can be mined from a single modality. However, unifying image-based knowledge across multimodal data is challenging because the modalities differ in image signal, resolution, data structure, etc. In this study, we design a supervised deep model that jointly analyzes brain morphometry and functional connectivity on the cortical surface, which we name deep multimodal brain network learning (DMBNL). Two graph-based kernels, i.e., a geometry-aware surface kernel (GSK) and a topology-aware network kernel (TNK), are proposed for processing cortical surface morphometry and the brain functional network, respectively. The vertex features on the cortical surface from the GSK are pooled and fed into the TNK as its initial regional features. Finally, a graph-level feature is computed for each individual and can thus be used for classification tasks. We test our model on a large autism imaging dataset, and the experimental results demonstrate its effectiveness.
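The pipeline described above (vertex-level GSK, pooling into regions, region-level TNK, graph-level readout) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the mean-aggregation graph convolution, the random mesh/connectivity data, the parcellation step, and all shapes and variable names are assumptions made for demonstration.

```python
import numpy as np

# Hypothetical sketch of the DMBNL pipeline from the abstract.
# All data, shapes, and the simple normalized-adjacency graph
# convolution are illustrative assumptions.

rng = np.random.default_rng(0)

def graph_conv(A, X, W):
    """One graph-convolution layer: aggregate neighbor features, project, ReLU.
    A: (n, n) adjacency, X: (n, d_in) features, W: (d_in, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))      # row-normalize aggregation
    return np.maximum(D_inv @ A_hat @ X @ W, 0)   # ReLU nonlinearity

# --- Geometry-aware surface kernel (GSK): vertex-level features ---
n_vertices, n_regions, d = 100, 5, 8
A_surf = (rng.random((n_vertices, n_vertices)) < 0.1).astype(float)
A_surf = np.maximum(A_surf, A_surf.T)             # symmetric cortical mesh graph
X_vert = rng.standard_normal((n_vertices, 3))     # e.g. thickness, area, curvature
H_vert = graph_conv(A_surf, X_vert, rng.standard_normal((3, d)))

# --- Pool vertex features into regions (assumed parcellation) ---
parcel = rng.integers(0, n_regions, size=n_vertices)  # vertex -> region label
X_reg = np.stack([H_vert[parcel == r].mean(axis=0) for r in range(n_regions)])

# --- Topology-aware network kernel (TNK) on the functional network ---
A_func = np.abs(rng.standard_normal((n_regions, n_regions)))
A_func = (A_func + A_func.T) / 2                  # symmetric functional connectivity
H_reg = graph_conv(A_func, X_reg, rng.standard_normal((d, d)))

# --- Graph-level readout: one feature vector per subject ---
z = H_reg.mean(axis=0)                            # feed to a classifier head
print(z.shape)
```

In this sketch, mean pooling serves both the vertex-to-region step and the graph-level readout; the paper's actual kernels presumably learn geometry- and topology-specific aggregations, but the data flow between the two stages is as shown.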
