Hardware-Oriented Shallow Joint Demosaicing and Denoising
Xiaodong Yang, Wengang Zhou, Houqiang Li
Length: 00:12:18
State-of-the-art Deep Neural Networks (DNNs) are advancing medical image processing. However, DNNs are susceptible to adversarial attacks, which can significantly degrade model performance and pose a threat to clinical diagnosis. A common defence is adversarial training, which depends heavily on the adversarial examples harnessed during the training stage. However, the adversarial medical examples generated by many existing works carry perturbations that are too perceptible to serve as effective adversarial examples. To improve imperceptibility, we propose a Regional Saliency Map Attack that generates an adversarial example by perturbing only a small number of pixels. Extensive experiments show that, on average, our method causes the same degradation in model performance with quantitatively less perceptible perturbations. Visualisations also verify that the improvement in imperceptibility within an image is both global and regional.
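To make the idea of a saliency-guided sparse perturbation concrete, below is a minimal PyTorch sketch, not the authors' implementation: it perturbs only the k pixels with the largest input-gradient magnitude. The model interface and the values of k and epsilon are illustrative assumptions.

```python
# Minimal sketch of a saliency-guided sparse attack (illustrative only;
# not the Regional Saliency Map Attack from the paper).
import torch
import torch.nn.functional as F

def sparse_saliency_attack(model, x, y, k=100, epsilon=0.03):
    """Perturb only the k most salient pixels of an image batch x (N,C,H,W)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad = x.grad.detach()

    # Saliency: per-pixel gradient magnitude, summed over channels.
    saliency = grad.abs().sum(dim=1)               # (N, H, W)
    flat = saliency.view(saliency.size(0), -1)     # (N, H*W)
    _, idx = flat.topk(k, dim=1)                   # indices of the top-k pixels

    # Binary mask selecting only those top-k salient pixels.
    mask = torch.zeros_like(flat)
    mask.scatter_(1, idx, 1.0)
    mask = mask.view_as(saliency).unsqueeze(1)     # (N, 1, H, W)

    # FGSM-style gradient-sign step restricted to the masked region.
    x_adv = x.detach() + epsilon * grad.sign() * mask
    return x_adv.clamp(0.0, 1.0)
```

Restricting the perturbation to a saliency-ranked subset of pixels is what keeps the attack sparse; all remaining pixels are left untouched, which is the mechanism behind the imperceptibility gains the abstract describes.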