TWO-WAY FEATURE-ALIGNED AND ATTENTION-RECTIFIED ADVERSARIAL TRAINING
Haitao Zhang, Fan Jia, Quanxin Zhang, Yahong Han, Xiaohui Kuang, Yu-an Tan
SPS
Adversarial training increases robustness by augmenting training data with adversarial examples. However, vanilla adversarial training may overfit to particular adversarial attacks. Small perturbations in images introduce errors that are gradually amplified as they propagate through the model, eventually leading to misclassification. Moreover, small perturbations also distract the classifier's attention away from significant features that are relevant to the true label. In this paper, we propose a novel two-way feature-aligned and attention-rectified adversarial training (FAAR) to improve adversarial training (AT).
FAAR utilizes two-way feature alignment and attention rectification to mitigate the problems mentioned above. FAAR effectively suppresses perturbations in low-level, high-level, and global features by moving the features of perturbed images toward those of clean images via two-way feature alignment. It also guides the model to focus on useful features that are correlated with the true label by rectifying gradient-weighted attention. In addition, feature alignment facilitates attention rectification by reducing perturbations in high-level features. Our proposed method FAAR surpasses existing AT methods in three respects. First, it pushes the model to remain invariant across different adversarial attacks and different perturbation magnitudes. Second, it can be applied to any convolutional neural network. Third, the training process is end-to-end. In experiments, FAAR shows promising defense performance on CIFAR-10 and ImageNet.
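The two objectives described above might be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the mean-squared feature-alignment term, the cosine-distance attention term, and the weights `alpha` and `beta` are all assumptions introduced here for clarity.

```python
import numpy as np

def feature_alignment_loss(clean_feats, adv_feats):
    """Assumed sketch: mean squared distance between clean and
    adversarial feature maps, summed over the aligned layers
    (e.g., low-level, high-level, and global features)."""
    return sum(np.mean((c - a) ** 2) for c, a in zip(clean_feats, adv_feats))

def attention_rectification_loss(clean_attn, adv_attn):
    """Assumed sketch: cosine distance between gradient-weighted
    attention maps (Grad-CAM-style) of clean and perturbed inputs,
    penalizing attention drift caused by the perturbation."""
    c = clean_attn / (np.linalg.norm(clean_attn) + 1e-8)
    a = adv_attn / (np.linalg.norm(adv_attn) + 1e-8)
    return 1.0 - float(np.sum(c * a))

def faar_total_loss(ce_loss, clean_feats, adv_feats, clean_attn, adv_attn,
                    alpha=1.0, beta=1.0):
    """Hypothetical combined objective: classification loss plus the
    two regularizers, with illustrative weights alpha and beta."""
    return (ce_loss
            + alpha * feature_alignment_loss(clean_feats, adv_feats)
            + beta * attention_rectification_loss(clean_attn, adv_attn))
```

With identical clean and adversarial features and attention maps, both regularizers vanish and the total loss reduces to the classification loss alone, which is the intended fixed point of the alignment.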