Transfer Learning For Fundus Image Quality Assessment Using Discriminating Patches
Ammu R, Neelam Sinha
Automated screening for eye-related disorders requires input images of diagnostic quality. Image quality may be degraded by factors such as improper light exposure, blurring, and artifacts, so the diagnostic quality of an image must be assured before it is used for a reliable diagnosis. Quality levels in fundus images should be differentiated by how faithfully fine anatomical features are depicted. The features relevant to quality assessment tend to be concentrated in a few regions of the image, referred to here as 'discriminative regions'; the challenge is therefore to identify the most informative patches for model training. In this work, a transfer learning-based no-reference quality assessment method is developed to determine the quality of an acquired fundus image. This is accomplished by identifying the severity levels of blur and light exposure, together with a confidence score reflecting the certainty of the prediction. We also propose a robust framework based on the Expectation-Maximization (EM) method that focuses on discriminative patches to retain the most relevant information. Patches are classified as discriminative or non-discriminative based on their level of detail, and image quality is determined using only the prediction probabilities of the discriminative patches. The training images are synthetically distorted with various levels of light exposure and blur according to the proposed degradation model. A comparative study was performed against recent deep learning models, and the approach was tested on real images from multiple public databases. We also demonstrate that the discriminative patch-based model outperforms a simple patch-based model, with a 5.17% improvement in sensitivity computed over 310 images from the EyeQ dataset.
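
As a rough illustration of the transfer-learning setup described above, the sketch below replaces the head of an ImageNet-pretrained backbone with a new severity classifier and trains only that head, returning per-patch softmax probabilities whose maximum serves as a prediction confidence. The ResNet-18 backbone, the number of severity classes, and the function names are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

NUM_SEVERITY_LEVELS = 5  # assumed number of joint blur/exposure severity classes

# Hypothetical backbone: any pretrained CNN would do; the paper's choice is not assumed here.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                                   # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_SEVERITY_LEVELS)  # new trainable head

def patch_probabilities(patch_batch):
    """Per-patch class probabilities; the max per row acts as a confidence score."""
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(patch_batch), dim=1)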
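
The EM-style focus on discriminative patches can be sketched as alternating between estimating the image-level label from the currently retained patches (M-step) and re-selecting the patches most consistent with that label (E-step). The retention rule, iteration count, and names below are illustrative assumptions rather than the paper's exact algorithm.

import numpy as np

def em_select_discriminative(patch_probs, n_iter=10, keep_frac=0.5):
    """EM-style patch selection sketch.

    patch_probs: (n_patches, n_classes) per-patch probabilities from the CNN.
    Returns the image-level label, its confidence, and the retained-patch mask.
    """
    keep = np.ones(len(patch_probs), dtype=bool)
    for _ in range(n_iter):
        # M-step: image-level distribution from currently retained patches
        image_prob = patch_probs[keep].mean(axis=0)
        label = image_prob.argmax()
        # E-step: retain the patches most consistent with the image-level label
        scores = patch_probs[:, label]
        keep = scores >= np.quantile(scores, 1.0 - keep_frac)
    confidence = patch_probs[keep][:, label].mean()  # certainty of the prediction
    return label, confidence, keep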
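
Finally, a minimal sketch of a synthetic degradation model of this kind, assuming Gaussian blur as the blur distortion and a multiplicative gain as the under-/over-exposure distortion; the kernel sizes and gain values are placeholders, not the paper's calibrated degradation parameters.

import cv2
import numpy as np

def degrade_fundus(image, blur_level=0, exposure_level=0):
    """Apply synthetic distortion at discrete severity levels.

    blur_level: 0 (none) to 3 (severe); exposure_level: -2 (dark) to +2 (bright).
    """
    out = image.astype(np.float32)
    if blur_level > 0:
        k = 2 * blur_level + 1                 # odd Gaussian kernel size: 3, 5, 7
        out = cv2.GaussianBlur(out, (k, k), 0)
    gain = 1.0 + 0.25 * exposure_level         # exposure modeled as a simple gain
    return np.clip(out * gain, 0, 255).astype(np.uint8)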