03 Oct 2022

Deep neural networks are vulnerable to adversarial perturbations. Recent research indicates that misclassification may result from a distribution mismatch between adversarial examples and clean images. Inspired by the common consensus that human neural representations are sparse and redundant, we propose an input-transformation-based defense that uses sparse representation to bridge this distribution mismatch. In our method, we first learn a global overcomplete dictionary from a set of image patches extracted from training images; we then purify input images before feeding them into the neural network by sparse-coding them over the learned dictionary. Compared with other attack-agnostic defenses, our method achieves comparable results on CIFAR-10 and ImageNet across various attack settings.
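The purification pipeline described in the abstract (extract patches, sparse-code them over a learned overcomplete dictionary, reconstruct the image) can be sketched with scikit-learn's dictionary-learning tools. The following is a minimal grayscale sketch, not the authors' implementation; the patch size, number of atoms, and sparsity level are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d,
    reconstruct_from_patches_2d,
)

def learn_dictionary(train_images, patch_size=(8, 8), n_atoms=256,
                     patches_per_image=500, seed=0):
    """Learn a global overcomplete dictionary from random training patches.

    256 atoms over 8x8 (64-dimensional) patches gives an overcomplete
    dictionary, as the method requires. All sizes here are assumptions.
    """
    patches = np.concatenate([
        extract_patches_2d(img, patch_size,
                           max_patches=patches_per_image, random_state=seed)
        for img in train_images
    ])
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=0)           # center patches, standard practice
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       max_iter=10, random_state=seed)
    dico.fit(X - mean)
    return dico, mean

def purify(image, dico, mean, patch_size=(8, 8), n_nonzero=5):
    """Sparse-code every patch of an input image and reconstruct it.

    Uses orthogonal matching pursuit with a fixed sparsity level;
    overlapping patches are averaged in the reconstruction.
    """
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1) - mean
    dico.set_params(transform_algorithm="omp",
                    transform_n_nonzero_coefs=n_nonzero)
    codes = dico.transform(X)
    recon = (codes @ dico.components_ + mean).reshape(len(patches), *patch_size)
    return np.clip(reconstruct_from_patches_2d(recon, image.shape), 0.0, 1.0)
```

In use, a (possibly adversarial) input would be passed through `purify` before classification, e.g. `logits = model(purify(x, dico, mean))`; the sparse reconstruction discards perturbation energy that the dictionary atoms cannot represent.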
