CO-OPERATIVE CNN FOR VISUAL SALIENCY PREDICTION ON WCE IMAGES
George Dimas (Department of Computer Science and Biomedical Informatics, University of Thessaly, Greece); Anastasios Koulaouzidis (The Royal Infirmary of Edinburgh); Dimitris K Iakovidis (Department of Computer Science and Biomedical Informatics, University of Thessaly, Greece)
A physician’s experience is highly correlated with how the content of medical images is interpreted. Over time, physicians develop their ability to examine such images, and this is usually reflected in the gaze patterns they follow to observe the visual cues that lead them to diagnostic decisions. In the context of gaze prediction, graph-based and machine learning methods have been proposed for visual saliency estimation on generic images. In this work we present a novel and robust gaze estimation methodology based on physicians’ eye fixations, using convolutional neural networks (CNNs) trained according to a novel co-operative scheme on medical images acquired during Wireless Capsule Endoscopy (WCE). The proposed training approach considers both the reconstruction accuracy of the estimated saliency maps and their contribution to the classification of normal and abnormal findings. The model trained with the proposed co-operative procedure achieved an average score of 0.76 for Judd’s Area Under the ROC Curve (AUC-J).
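To make the co-operative idea concrete, the following is a minimal, hypothetical PyTorch sketch of a two-branch network in which a saliency head reconstructs a fixation map while a classification head consumes saliency-weighted features, and both objectives are combined in a single loss. The architecture, layer sizes, loss weighting, and function names are illustrative assumptions, not the authors’ actual implementation.

import torch
import torch.nn as nn

class CooperativeSaliencyNet(nn.Module):
    """Illustrative two-branch network (not the paper's exact architecture):
    a saliency branch reconstructing a fixation map, and a classification
    branch operating on saliency-weighted features."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared convolutional encoder (placeholder depths/widths).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Saliency head: predicts a single-channel saliency map in [0, 1].
        self.saliency_head = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        # Classification head: normal vs. abnormal finding.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        saliency = self.saliency_head(feats)      # (B, 1, H, W)
        # Re-weight features by the predicted saliency before classifying,
        # so classification performance feeds back into the saliency branch.
        logits = self.classifier(feats * saliency)
        return saliency, logits

def cooperative_loss(saliency, logits, fixation_map, label, alpha=0.5):
    """Weighted sum of saliency reconstruction and classification losses
    (the weighting scheme here is an assumption for illustration)."""
    rec = nn.functional.mse_loss(saliency, fixation_map)
    cls = nn.functional.cross_entropy(logits, label)
    return alpha * rec + (1.0 - alpha) * cls

Training with such a combined objective encourages the predicted saliency maps to be accurate with respect to the physicians’ fixations while also remaining useful for discriminating normal from abnormal WCE findings, which is the essence of the co-operative scheme described above.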