  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:10:09
20 Sep 2021

In this paper, we focus on detecting adversarial images generated by the white-box adversarial attack proposed by Carlini and Wagner (C&W for short). The C&W attack is one of the most powerful attacks, achieving nearly 100% success rates at fooling deep neural networks (DNNs) while preserving the visual quality of the adversarial images. Since the C&W attack finds adversarial perturbations by optimizing a loss function based on the logit layer of the DNN, we first add Gaussian noise to destroy the perturbations. For high-confidence adversarial images, strong Gaussian noise is employed. To reduce the impact of such strong noise on legitimate images, an FFDNet filter is then used for denoising. By comparing the prediction on a test image with that on its noised-then-denoised version, the proposed method flags the test image as adversarial when the two predictions differ. Experiments on ImageNet show that the proposed method effectively detects targeted and untargeted C&W adversarial images generated on well-known models (ResNet-50, Inception v2, and Inception v3), achieving higher F1 scores than the state of the art.
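As a rough illustration of the detection rule described above (not the authors' implementation), the sketch below adds Gaussian noise to an input, optionally denoises it, and compares the model's predictions before and after the round trip. The fixed noise level `sigma` and the injectable `denoiser` callable are assumptions: the paper scales the noise strength with the attack confidence and denoises with an FFDNet filter, neither of which is reproduced here.

    import torch

    def detect_adversarial(model, image, sigma=0.1, denoiser=None):
        # Flag `image` (a CHW float tensor) as adversarial when the
        # model's top-1 prediction changes after a noise-then-denoise
        # round trip. `sigma` and `denoiser` are illustrative stand-ins.
        model.eval()
        with torch.no_grad():
            pred_clean = model(image.unsqueeze(0)).argmax(dim=1).item()
            # Gaussian noise to destroy the logit-layer-optimized perturbation.
            noisy = image + sigma * torch.randn_like(image)
            # Stand-in for the FFDNet denoising step in the paper.
            denoised = denoiser(noisy) if denoiser is not None else noisy
            pred_round_trip = model(denoised.unsqueeze(0)).argmax(dim=1).item()
        # Differing predictions => the image is detected as adversarial.
        return pred_clean != pred_round_trip

With, say, a torchvision ResNet-50 and a normalized ImageNet tensor, detect_adversarial(model, x) returns True whenever the round trip flips the top-1 label, which is the paper's decision criterion.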
