Parallel Partitioning: Path Reducing and Union-Find Based Watershed for the GPU
Yeva Gabrielyan, Varduhi Yeghiazaryan, Irina Voiculescu
SPS
Length: 00:14:22
Deep neural networks have shown outstanding performance in various areas, but adversarial examples can easily fool them. Although strong adversarial attacks have defeated diverse adversarial defense methods, adversarial training, which augments training data with adversarial examples, remains an effective defense strategy. To further improve adversarial robustness, this paper exploits adversarial examples of adversarial examples. We observe that these doubly adversarial examples tend to return to the original prediction on the clean images but sometimes drift toward other classes. From this finding, we propose a regularization loss that prevents these drifts, which mitigates the vulnerability against multi-targeted attacks. Experimental results on the CIFAR-10 and CIFAR-100 datasets empirically show that the proposed loss improves adversarial robustness.
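The core idea of the abstract — attacking an adversarial example a second time and penalising any drift of its prediction away from the prediction on the clean input — can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's method: it uses a hypothetical 1-D logistic "network", a single FGSM step as the attack, and a squared-difference drift penalty; the real loss, attack, and architecture in the paper will differ.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D logistic model (hypothetical stand-in for a deep network).
w, b = 2.0, -0.5

def predict(x):
    return sigmoid(w * x + b)

def fgsm(x, y, eps):
    """One FGSM step: nudge x in the direction that increases the loss.
    For the logistic loss, dL/dx = (p - y) * w, so we only need its sign."""
    grad = (predict(x) - y) * w
    sign = 1.0 if grad > 0 else -1.0 if grad < 0 else 0.0
    return x + eps * sign

def drift_penalty(x, y, eps):
    """Sketch of the regularisation idea: craft an adversarial example of
    an adversarial example, then penalise its prediction drifting away
    from the prediction on the clean input."""
    x_adv = fgsm(x, y, eps)        # first attack
    x_adv2 = fgsm(x_adv, y, eps)   # attack on the attack
    return (predict(x_adv2) - predict(x)) ** 2
```

In training, a term like `drift_penalty` would be added to the usual adversarial-training loss, so that doubly adversarial examples are encouraged to return to the clean prediction rather than drift toward other classes.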