SUPERRESOLUTION AND SEGMENTATION OF OCT SCANS USING MULTI-STAGE ADVERSARIAL GUIDED ATTENTION TRAINING
Paria Jeihouni, Omid Dehzangi, Nasser M. Nasrabadi, Annahita Amireskandari, Ali Dabouei, Ali Rezai
Optical coherence tomography (OCT) is a non-invasive, easy-to-acquire imaging modality being investigated as a source of biomarkers for diagnosing Alzheimer's disease (AD). According to current hypotheses, the thickness of the retinal layers, which can be measured from OCT scans, is a promising biomarker for AD diagnosis. We previously proposed the multi-stage and multi-discriminatory generative adversarial network (MultiSDGAN) to translate OCT scans into high-resolution segmentation labels. In this work, we evaluate and compare various combinations of channel and spatial attention applied at multiple stages of the generator to extract richer feature maps by capturing contextual relationships and thereby improve segmentation performance. Furthermore, we incorporate guided attention in the generator by enforcing an L1 loss between the spatial attention masks and specifically designed binary masks, and we investigate its effectiveness in improving the final results. An ablation study on our dataset under five-fold cross-validation (5-CV) suggests that the proposed MultiSDGAN with a serial attention module provides the most competitive performance, and that guiding the spatial attention feature maps with binary masks further improves the results. Compared with the baseline model, adding guided attention yielded relative improvements of 21.44% and 19.45% in the Dice coefficient and SSIM, respectively.
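The sketch below illustrates, in a minimal PyTorch form, the two ideas the abstract describes: a serial attention module (channel attention followed by spatial attention) and a guided-attention term that applies an L1 loss between the spatial attention map and a designed binary mask. The module and function names (SerialAttention, guided_attention_loss) and the weighting factor lambda_guide are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed names/values), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SerialAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: global average pooling -> bottleneck MLP -> sigmoid gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 1x1 conv over the channel-gated features -> sigmoid map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)         # reweight feature channels
        spatial_map = self.spatial_gate(x)    # (B, 1, H, W) spatial attention map
        return x * spatial_map, spatial_map


def guided_attention_loss(spatial_map, binary_mask):
    """L1 penalty pushing the spatial attention map toward the binary guidance mask."""
    return F.l1_loss(spatial_map, binary_mask)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)                 # intermediate generator features
    mask = (torch.rand(2, 1, 128, 128) > 0.5).float()    # designed binary guidance mask
    attn = SerialAttention(64)
    out, spatial_map = attn(feats)
    lambda_guide = 1.0                                    # hypothetical weighting factor
    loss_guide = lambda_guide * guided_attention_loss(spatial_map, mask)
    print(out.shape, loss_guide.item())
```

In practice this guided term would be added to the generator's adversarial and segmentation losses at each stage; the exact stages, mask design, and loss weights follow the paper rather than this sketch.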