Combining Multimodal Information for Metal Artefact Reduction: An Unsupervised Deep Learning Framework
Marta Ranzini, Irme Groothuis, Kerstin Kläser, Manuel Jorge Cardoso, Johann Henckel, Sebastien Ourselin, Alister Hart, Marc Modat
Metal artefact reduction (MAR) techniques aim to remove metal-induced noise from clinical images. In Computed Tomography (CT), supervised deep learning approaches have proven effective but generalise poorly, as they mostly rely on synthetic data. In Magnetic Resonance Imaging (MRI), by contrast, no method has yet been introduced to correct the susceptibility artefact, which persists even in MAR-specific acquisitions. In this work, we hypothesise that a multimodal approach to MAR would benefit both CT and MRI: because the artefact appears differently in each modality, their complementary information can compensate for the corrupted signal in either one. We therefore propose an unsupervised deep learning method for multimodal MAR, introducing Locally Normalised Cross Correlation (LNCC) as a loss term that encourages the fusion of multimodal information. Experiments show that our approach favours a smoother correction in the CT while promoting signal recovery in the MRI.
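As a rough illustration of the LNCC term mentioned above, the following is a minimal NumPy sketch of Locally Normalised Cross Correlation between two 2D images: the normalised cross-correlation is computed over every local window and averaged. The window size, padding strategy, and the way the term is folded into the training loss are assumptions for illustration; the abstract does not specify them, and the authors' implementation will differ in detail (e.g. a differentiable, GPU-based formulation).

```python
import numpy as np

def lncc(a, b, win=3, eps=1e-8):
    """Locally Normalised Cross Correlation between two 2D arrays.

    For each win x win patch, the two corresponding patches are
    mean-centred and their normalised dot product is computed; the
    result is the average over all patch positions. Values lie in
    [-1, 1], with 1 meaning locally identical structure.

    Hypothetical re-implementation: window size and handling of
    flat patches (eps) are illustrative choices, not the paper's.
    """
    assert a.shape == b.shape and a.ndim == 2
    h, w = a.shape
    scores = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            pa = a[i:i + win, j:j + win].ravel()
            pb = b[i:i + win, j:j + win].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.linalg.norm(pa) * np.linalg.norm(pb) + eps
            scores.append(float(pa @ pb) / denom)
    return float(np.mean(scores))

def lncc_loss(a, b, win=3):
    """Loss form: 1 - LNCC, so perfectly correlated images give 0."""
    return 1.0 - lncc(a, b, win=win)
```

Used as a loss between features derived from the two modalities, a low `lncc_loss` rewards locally consistent structure even when absolute intensities differ, which is one plausible reason a cross-correlation term suits CT/MRI fusion.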