27 Oct 2020

It has been shown that small, image-agnostic perturbations, called universal perturbations, can fool deep-learning-based classifiers and significantly reduce classification accuracy. In this paper, we propose a novel method to compute more effective universal perturbations via enhanced projected gradient descent on targeted classifiers. By maximizing the original loss function of the targeted model, we update the adversarial example through backpropagation and optimize the perturbation by accumulating small updates over consecutively perturbed images. We generate our attack against several modern CNN classifiers on ImageNet and compare its performance with other state-of-the-art universal adversarial attack methods. The results show that our proposed attack achieves much higher fooling rates than state-of-the-art universal adversarial attacks and generalizes well in cross-model evaluation.
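
The abstract describes a PGD-style procedure: a single perturbation is updated by gradient ascent on the model's loss over many images and kept within a norm bound. Below is a minimal illustrative sketch of that idea in PyTorch; it is not the authors' implementation, and the function name, L-infinity budget, step size, and input resolution are assumptions chosen for clarity.

```python
# Sketch of a universal adversarial perturbation via projected gradient ascent.
# Assumes a PyTorch image classifier and a DataLoader of (images, labels) in [0, 1].
import torch
import torch.nn.functional as F

def universal_pgd(model, data_loader, eps=10 / 255, step_size=1 / 255,
                  epochs=5, device="cpu"):
    """Accumulate one image-agnostic perturbation by maximizing the
    classification loss over many images, projected onto an L-inf ball."""
    model = model.eval().to(device)
    # Single perturbation shared by all images (assumed 3x224x224 inputs).
    delta = torch.zeros(1, 3, 224, 224, device=device)

    for _ in range(epochs):
        for images, labels in data_loader:
            images, labels = images.to(device), labels.to(device)
            delta.requires_grad_(True)
            # Forward pass on perturbed, clamped images; maximize the loss.
            logits = model(torch.clamp(images + delta, 0, 1))
            loss = F.cross_entropy(logits, labels)
            grad, = torch.autograd.grad(loss, delta)
            # Gradient-ascent step, then project back onto the eps-ball.
            delta = (delta.detach() + step_size * grad.sign()).clamp(-eps, eps)
    return delta.detach()
```

The resulting `delta` can then be added to held-out images (clamping to the valid pixel range) and the fooling rate measured as the fraction of predictions that change, which is the evaluation metric the abstract refers to.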
