Semi-supervised multimodality learning with Graph Convolutional Neural Networks for Disease Diagnosis
Yongxiang Huang, Albert C. S. Chung
The volume of digitized clinical data grows dramatically every year. A substantial part of this data is multi-modal, combining imaging with non-imaging data such as phenotypic and genetic information. Although the success of CNNs has enabled a wide range of applications that learn from imaging data, incorporating imaging and non-imaging data complementarily to improve diagnostic quality remains challenging. To tackle this challenge, we propose a novel graph-convolutional model, built on the proposed concept of an edge adapter, for learning an adaptive population graph from a multi-modal database. The edge adapter can be jointly optimized with the proposed graph convolutional neural network for semi-supervised node classification. Experimental results on two challenging multi-modal medical databases demonstrate the potential of our method for learning from multi-modal data for disease diagnosis.
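
To make the abstract's idea concrete, below is a minimal PyTorch sketch of an edge-adapter-plus-GCN pipeline, not the authors' implementation. The class names (EdgeAdapter, GCNLayer, AdaptiveGraphDiagnosisNet), the MLP form of the adapter, the pairwise feature concatenation, and all dimensions are assumptions made for illustration; the only points taken from the abstract are that edge weights of the population graph are learned from non-imaging data and that the adapter is trained jointly with the GCN under a semi-supervised node-classification loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class EdgeAdapter(nn.Module):
        """Hypothetical edge adapter: maps a pair of non-imaging feature vectors
        to a scalar edge weight, so the population graph is learned rather than
        fixed by hand-crafted similarity rules."""

        def __init__(self, num_nonimg_features: int, hidden_dim: int = 16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * num_nonimg_features, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, nonimg: torch.Tensor) -> torch.Tensor:
            # nonimg: (N, F) phenotypic/genetic features, one row per subject.
            n = nonimg.size(0)
            pairs = torch.cat(
                [nonimg.unsqueeze(1).expand(n, n, -1),
                 nonimg.unsqueeze(0).expand(n, n, -1)], dim=-1)
            weights = torch.sigmoid(self.mlp(pairs)).squeeze(-1)  # (N, N) in (0, 1)
            return 0.5 * (weights + weights.t())  # symmetrize the adjacency


    class GCNLayer(nn.Module):
        """Plain graph-convolution layer on a dense, symmetrically normalized adjacency."""

        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            adj = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
            deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-6).pow(-0.5)
            norm_adj = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
            return norm_adj @ self.linear(x)


    class AdaptiveGraphDiagnosisNet(nn.Module):
        """Joint model: the edge adapter builds the population graph from non-imaging
        data, and GCN layers classify each subject from imaging-derived features."""

        def __init__(self, num_img_features, num_nonimg_features, num_classes, hidden=32):
            super().__init__()
            self.edge_adapter = EdgeAdapter(num_nonimg_features)
            self.gcn1 = GCNLayer(num_img_features, hidden)
            self.gcn2 = GCNLayer(hidden, num_classes)

        def forward(self, img_feats, nonimg_feats):
            adj = self.edge_adapter(nonimg_feats)
            h = F.relu(self.gcn1(img_feats, adj))
            return self.gcn2(h, adj)  # per-subject class logits


    # Semi-supervised training: the loss is computed only on labelled nodes, while
    # message passing still propagates information from unlabelled subjects.
    if __name__ == "__main__":
        n_subjects, d_img, d_nonimg, n_classes = 100, 64, 8, 2  # toy sizes
        img = torch.randn(n_subjects, d_img)
        nonimg = torch.randn(n_subjects, d_nonimg)
        labels = torch.randint(0, n_classes, (n_subjects,))
        labelled = torch.zeros(n_subjects, dtype=torch.bool)
        labelled[:30] = True  # only a fraction of nodes carry labels

        model = AdaptiveGraphDiagnosisNet(d_img, d_nonimg, n_classes)
        optim = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(10):
            logits = model(img, nonimg)
            loss = F.cross_entropy(logits[labelled], labels[labelled])
            optim.zero_grad()
            loss.backward()
            optim.step()  # edge adapter and GCN parameters are updated jointly

The key design point the sketch tries to capture is that the adjacency matrix is a differentiable function of the non-imaging data, so gradients from the diagnosis loss flow back into the edge adapter and the population graph adapts to the classification task rather than being fixed in advance.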