Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning
Satyananda Kashyap, Alexandros Karargyris, Joy Tzung-yu Wu, Yaniv Gur, Arjun Sharma, Ken C. L. Wong, Mehdi Moradi, Tanveer Syeda-Mahmood
Deep learning has become the de facto approach to recognizing anomalies in medical imaging. However, the 'black box' way in which these models classify medical images into anomaly labels poses problems for their acceptance, particularly among clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps, but they cannot guarantee that the network is focusing on the relevant image region that fully contains the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly, when present, is assured to overlap its expected location. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of a Bi-Directional Long Short-Term Memory recurrent neural network (Bi-LSTM) and DenseNet-121. Using this expected location to bias a subsequent attention-guided inference network based on ResNet-101 isolates the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
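To make the two-stage pipeline described above concrete, the following is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's implementation: the class names (LocationLabelNet, LocationBiasedClassifier), feature dimensions, and the multiplicative biasing of the ResNet-101 feature map by a soft location prior are all hypothetical choices standing in for details the abstract does not specify.

```python
# Illustrative sketch of the two-stage idea in the abstract.
# All names, dimensions, and the biasing scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class LocationLabelNet(nn.Module):
    """Stage 1: hybrid Bi-LSTM (report text) + DenseNet-121 (image)
    model that predicts location-specific anomaly labels."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_labels=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        densenet = models.densenet121(weights=None)
        densenet.classifier = nn.Identity()  # expose 1024-d image features
        self.cnn = densenet
        self.head = nn.Linear(2 * hidden_dim + 1024, num_labels)

    def forward(self, images, token_ids):
        _, (h, _) = self.bilstm(self.embed(token_ids))
        text_feat = torch.cat([h[-2], h[-1]], dim=1)  # final fwd + bwd states
        img_feat = self.cnn(images)
        return self.head(torch.cat([text_feat, img_feat], dim=1))


class LocationBiasedClassifier(nn.Module):
    """Stage 2: ResNet-101 backbone whose spatial feature map is scaled
    by an expected-location prior before pooling, steering attention
    toward the region where the anomaly is expected."""

    def __init__(self, num_labels=20):
        super().__init__()
        resnet = models.resnet101(weights=None)
        # Drop the average pool and fc layers to keep the B x 2048 x H x W map.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.fc = nn.Linear(2048, num_labels)

    def forward(self, images, location_prior):
        feat = self.backbone(images)                  # B x 2048 x H x W
        prior = F.interpolate(location_prior,         # B x 1 x h x w -> H x W
                              size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        feat = feat * (1.0 + prior)                   # bias, do not zero out
        pooled = feat.mean(dim=(2, 3))
        return self.fc(pooled)


if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)
    tokens = torch.randint(0, 1000, (2, 40))
    prior = torch.rand(2, 1, 224, 224)  # expected-location heat map
    stage1 = LocationLabelNet(vocab_size=1000)
    stage2 = LocationBiasedClassifier()
    print(stage1(imgs, tokens).shape, stage2(imgs, prior).shape)
```

The __main__ block only checks tensor shapes; in a real pipeline the location prior fed to stage two would be derived from the location-specific labels predicted in stage one, which is the coupling the paper's approach relies on.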