BOOSTING TRANSFERABILITY OF ADVERSARIAL EXAMPLE VIA AN ENHANCED EULER'S METHOD
Anjie Peng (Southwest University of Science and Technology); Zhi Lin (Southwest University of Science and Technology); Hui Zeng (Southwest University of Science and Technology); Wenxin Yu (Southwest University of Science and Technology); Xiangui Kang (Sun Yat-Sen University)
Adversarial examples are intentionally crafted images that force convolutional neural networks to produce erroneous classification outputs. Existing attacks construct transferable adversarial examples via the base attack algorithm, data augmentation, ensemble models, etc. Nevertheless, in the black-box case, and especially against defense models, the transferability of adversarial examples still needs improvement. In this paper, we develop a better base attack to boost the transferability of adversarial examples. By analyzing the baseline gradient-based attacks, we find that their iterative gradient-update procedures resemble numerical Euler's methods. From the perspective of numerical analysis, we employ an enhanced Euler's method, which has smaller approximation errors and is thus more accurate, to search for a better approximate optimal solution and construct a more transferable gradient-based attack. To this end, we apply the two-step gradient calculation of the enhanced Euler's method to correct the gradient descent directions. As a base attack, our method can be easily integrated with data augmentation and ensemble-model augmentation. Experimental results show that the proposed augmented attack significantly improves the transferability of adversarial examples, achieving an average attack success rate at least 3% higher than state-of-the-art methods under black-box settings with defense mechanisms.
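To make the two-step gradient idea concrete, the following is a minimal sketch of an I-FGSM-style iterative attack whose update direction is corrected by a Heun-style (improved Euler) two-stage gradient evaluation: a predictor step with the current gradient, a second gradient at the predicted point, and an averaged direction. This is an illustration of the general scheme under simplifying assumptions, not the authors' exact algorithm; the function names and hyperparameters are illustrative, and `grad_fn` stands in for the loss gradient of a real classifier.

```python
import numpy as np

def ifgsm_heun(x, grad_fn, eps=0.05, alpha=0.01, steps=10):
    """Iterative FGSM-style attack with a two-step (Heun / improved Euler)
    gradient correction. Illustrative sketch: `grad_fn(x)` is assumed to
    return the gradient of the classification loss w.r.t. the input.
    """
    x_adv = x.copy()
    for _ in range(steps):
        g1 = grad_fn(x_adv)                       # gradient at the current point
        x_pred = x_adv + alpha * np.sign(g1)      # plain Euler predictor step
        g2 = grad_fn(x_pred)                      # gradient at the predicted point
        g = 0.5 * (g1 + g2)                       # Heun-style averaged direction
        x_adv = x_adv + alpha * np.sign(g)        # corrected update
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

The averaged direction is what distinguishes this from the baseline one-step update: when the loss surface curves between the current and predicted points, the two gradients disagree and their mean yields a more accurate local direction, at the cost of one extra gradient evaluation per iteration.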