Defending Against Universal Attack via Curvature-aware Category Adversarial Training
Peilun Du, Xiaolong Zheng, Liang Liu, Huadong Ma
Adversarial training can defend against universal adversarial perturbations (UAP) by injecting corresponding adversarial samples during training. However, the adversarial samples used by existing methods, such as UAP itself, inevitably contain excessive perturbations related to other categories because of the inherent goal of universality. Training with such samples causes more erroneous predictions, which are associated with larger local positive curvature. In this paper, we propose a curvature-aware category adversarial training method that avoids these excessive perturbations. We introduce category-oriented adversarial masks synthesized with class-distinctive momentum. In addition, we split the min-max optimization loops of adversarial training into two parallel processes to reduce the training cost. Experimental results on CIFAR-10 and ImageNet show that our method achieves better defense accuracy under UAP with lower training cost than state-of-the-art baselines.
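
To make the idea of category-oriented perturbations concrete, the following minimal sketch shows one training step in which a separate perturbation is maintained per class and updated with a momentum term, rather than a single universal perturbation shared across all classes. This is an illustrative PyTorch sketch, not the authors' released implementation; names such as class_perturbations, momentum_buf, eps, step_size, and beta are assumptions chosen for clarity.

import torch
import torch.nn.functional as F

def category_adversarial_step(model, optimizer, x, y, class_perturbations,
                              momentum_buf, eps=8/255, step_size=2/255, beta=0.9):
    """One step: perturb each sample with its class-specific mask, refresh the
    masks by momentum signed-gradient ascent, then train on the perturbed batch.
    (Hypothetical sketch; not the paper's exact algorithm.)"""
    # Gather the current perturbation for each sample's class and make it a leaf
    # so we can take gradients of the loss with respect to it.
    delta = class_perturbations[y].clone().requires_grad_(True)
    x_adv = torch.clamp(x + delta, 0.0, 1.0)

    # Inner maximization: increase the classification loss w.r.t. the masks.
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_adv, delta)[0]

    # Accumulate a per-class momentum term and take a bounded signed ascent step.
    for c in y.unique():
        g_c = grad[y == c].mean(dim=0)
        momentum_buf[c] = beta * momentum_buf[c] + (1 - beta) * g_c
        class_perturbations[c] = torch.clamp(
            class_perturbations[c] + step_size * momentum_buf[c].sign(),
            -eps, eps)

    # Outer minimization: standard training on samples perturbed by their
    # class-specific masks.
    optimizer.zero_grad()
    x_train = torch.clamp(x + class_perturbations[y], 0.0, 1.0)
    loss = F.cross_entropy(model(x_train), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Here class_perturbations and momentum_buf would be tensors of shape [num_classes, C, H, W] initialized to zero; keeping the mask update (inner maximization) separate from the model update (outer minimization) also hints at how the two loops could be run as parallel processes to reduce training cost, as the abstract describes.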