Computationally-Efficient Vision Transformer For Medical Image Semantic Segmentation Via Dual Pseudo-Label Supervision
Ziyang Wang, Nanqing Dong, Irina Voiculescu
In this article, we present a leap-forward expansion to the study of explainability in neural networks by considering explanations as answers to abductive reasoning-based questions. With P as the prediction from a neural network, these questions are 'Why P?', 'What if not P?', and, for a given contrast prediction Q, 'Why P, rather than Q?'. The answers to these questions are observed correlations, counterfactuals, and contrastive explanations, respectively. Together, these explanations constitute abductive reasoning. The term observed refers to the specific case of post-hoc explainability, in which an explanatory technique explains the decision P after a trained neural network has made that decision. The primary advantage of viewing explanations through the lens of abductive reasoning-based questions is that explanations can be used as reasons while making decisions. The post-hoc field of explainability, which previously only justified decisions, becomes active by taking part in the decision-making process and providing limited but relevant and contextual interventions. The contributions of this article are: (i) realizing explanations as reasoning paradigms, (ii) providing a probabilistic definition of observed explanations and their completeness, (iii) creating a taxonomy for the evaluation of explanations, and (iv) demonstrating the replicability and reproducibility of gradient-based complete explainability across multiple applications and data modalities.
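The three questions map naturally onto gradient-based saliency. The snippet below is a minimal sketch, not the authors' implementation: it computes one input-gradient map per question in PyTorch, assuming a differentiable classifier `model`, an input tensor `x` with a batch dimension of one, and hypothetical class indices `p` (the prediction P) and `q` (the contrast Q).

```python
# Minimal sketch of gradient-based answers to the three abductive questions.
# `model`, `x`, `p`, and `q` are assumed placeholders, not names from the paper.
import torch

def gradient_explanations(model, x, p, q):
    """Return saliency maps for 'Why P?', 'What if not P?',
    and 'Why P, rather than Q?'."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)  # shape: (1, num_classes)

    # 'Why P?': observed correlation -- gradient of the P logit w.r.t. x
    # highlights input features the network associates with P.
    why_p = torch.autograd.grad(logits[0, p], x, retain_graph=True)[0]

    # 'What if not P?': counterfactual -- the negated gradient points
    # toward perturbations that would reduce confidence in P.
    what_if_not_p = torch.autograd.grad(-logits[0, p], x, retain_graph=True)[0]

    # 'Why P, rather than Q?': contrastive -- gradient of the logit
    # difference isolates evidence favoring P over the contrast class Q.
    why_p_not_q = torch.autograd.grad(logits[0, p] - logits[0, q], x)[0]

    return why_p, what_if_not_p, why_p_not_q

# Example usage with a hypothetical pretrained classifier (call .eval()
# first so dropout/batch-norm layers behave deterministically):
#   model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
#   maps = gradient_explanations(model, image_tensor, p=pred, q=contrast)
```

Under these assumptions, the first map answers 'Why P?' with the features correlated with the decision, the second suggests counterfactual changes that would overturn it, and the third mirrors the contrastive question by retaining only the evidence that separates P from Q.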