  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:12:18
05 Oct 2022

State-of-the-art deep neural networks (DNNs) are advancing medical image processing. However, DNNs are susceptible to adversarial attacks, which can significantly degrade model performance and pose a threat to clinical diagnosis. A common defence is adversarial training, which depends heavily on the adversarial examples harnessed during the training stage. However, the adversarial medical examples generated by many existing works are too perceptible to serve as realistic adversarial examples. To improve imperceptibility, we propose a Regional Saliency Map Attack that generates an adversarial example by perturbing only a small number of pixels. Extensive experiments show that, on average, our method causes the same degradation in model performance with quantitatively less perceptible perturbations. Visualisations also verify that the improvement in imperceptibility is both global and regional.
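The core idea of a saliency-guided sparse attack can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual method: it assumes a toy linear classifier (score `s(x) = w·x + b`, so the input gradient is simply `w`), ranks pixels by gradient magnitude as a stand-in for a saliency map, and perturbs only the top-`k` pixels. The function name, parameters, and model are all hypothetical.

```python
import numpy as np

def sparse_saliency_attack(x, w, y_true, k=5, eps=0.3):
    """Illustrative sketch: perturb only the k most salient pixels.

    Assumes a linear binary classifier score s(x) = w.x + b, whose
    gradient with respect to the input is just w. A real attack would
    backpropagate through a trained DNN instead.
    """
    # Push the score toward the wrong class: increase it if the true
    # label is 0, decrease it if the true label is 1.
    grad = w if y_true == 0 else -w
    saliency = np.abs(grad)              # pixel-wise saliency proxy
    idx = np.argsort(saliency)[-k:]      # indices of the top-k pixels
    x_adv = x.copy()
    x_adv[idx] += eps * np.sign(grad[idx])   # perturb only those pixels
    return np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
```

Restricting the perturbation to the `k` highest-saliency pixels is what keeps the change sparse, and hence less perceptible, than dense attacks such as FGSM that modify every pixel.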
