18 Oct 2022

Deep neural networks have shown outstanding performance in various areas, but adversarial examples can easily fool them. Although strong adversarial attacks have defeated diverse adversarial defense methods, adversarial training, which augments training data with adversarial examples, remains an effective defense strategy. To further improve adversarial robustness, this paper exploits adversarial examples of adversarial examples. We observe that these doubly adversarial examples tend to return to the model's original prediction on the clean image, but sometimes drift toward other classes. From this finding, we propose a regularization loss that prevents these drifts, mitigating vulnerability to multi-targeted attacks. Experiments on the CIFAR-10 and CIFAR-100 datasets show that the proposed loss improves adversarial robustness.
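The abstract does not give the exact form of the loss, so the following PyTorch sketch is only one plausible reading of the idea. The PGD attacker, its hyperparameters, the KL-based drift term, and all names (`pgd_attack`, `doubly_adversarial_loss`, `reg_weight`) are illustrative assumptions, not the authors' implementation: a first attack perturbs the clean input, a second attack perturbs the resulting adversarial example, and a regularizer pulls the prediction on the doubly adversarial example back toward the clean prediction so it "returns" rather than drifting to a third class.

```python
# Hypothetical sketch of a doubly-adversarial regularizer; not the paper's code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L-infinity PGD: ascend the cross-entropy w.r.t. label y."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the starting point x.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def doubly_adversarial_loss(model, x, y, reg_weight=1.0):
    # First adversarial example, as in standard adversarial training.
    x_adv = pgd_attack(model, x, y)
    # Attack the adversarial example with respect to the class the model
    # currently predicts for it, yielding the "doubly adversarial" example.
    with torch.no_grad():
        y_adv = model(x_adv).argmax(dim=1)
    x_adv2 = pgd_attack(model, x_adv, y_adv)
    # Standard adversarial-training term on the first adversarial example.
    ce_adv = F.cross_entropy(model(x_adv), y)
    # Assumed drift regularizer: KL divergence between the prediction on the
    # doubly adversarial example and the (detached) clean prediction. It is
    # small when x_adv2 returns to the clean prediction and large when it
    # drifts toward another class.
    p_clean = F.softmax(model(x), dim=1).detach()
    log_p2 = F.log_softmax(model(x_adv2), dim=1)
    drift = F.kl_div(log_p2, p_clean, reduction="batchmean")
    return ce_adv + reg_weight * drift
```

In a training loop, `doubly_adversarial_loss(model, x, y)` would replace the plain adversarial cross-entropy; `reg_weight` trades off clean-prediction consistency against the standard adversarial term.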
