Robust Decision-Based Black-Box Adversarial Attack Via Coarse-To-Fine Random Search
Byeong Cheon Kim, Youngjoon Yu, Yong Man Ro
Length: 00:11:28
Many studies on reducing the adversarial vulnerability of deep neural networks have been published in the field of machine learning. To evaluate the actual robustness of networks, various adversarial attacks have been proposed. Most previous works have focused on white-box settings, which assume that the adversary has full access to the target model. Since such settings are impractical in real-world situations, recent studies on black-box attacks have received considerable attention. However, existing black-box attacks have critical limitations, such as a low attack success rate or excessive reliance on gradient estimation and decision boundaries. Such attacks are ineffective even against weak defenses that rely on gradient obfuscation. In this paper, we propose a novel gradient-free decision-based black-box attack using random search optimization. The proposed method requires only a hard label (decision-based feedback) and remains effective against defenses that use gradient obfuscation. Experimental results validate its query efficiency and improved L2 distance.
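As a rough illustration of how a decision-based attack can operate with random search alone, the sketch below implements a generic hard-label attack loop in Python. It is not the paper's coarse-to-fine algorithm: the query() oracle, the step-size schedule, and the acceptance rule are illustrative assumptions used only to show the idea of refining a random adversarial perturbation with decreasing step sizes while querying labels only.

```python
# Hedged sketch of a generic decision-based (hard-label) random-search attack.
# NOT the authors' exact method; query(), the step schedule, and the
# acceptance rule are illustrative assumptions.
import numpy as np

def decision_based_random_search(x, y_true, query,
                                 step_sizes=(0.5, 0.1, 0.02),
                                 queries_per_step=200, seed=0):
    """Search for an adversarial example using only hard-label feedback.

    x          : original input, float array with values in [0, 1]
    y_true     : ground-truth label of x
    query      : callable mapping an input array to a predicted class label
    step_sizes : coarse-to-fine schedule of perturbation magnitudes (assumed)
    """
    rng = np.random.default_rng(seed)

    # 1. Coarse phase: find any misclassified point with large random noise.
    x_adv = None
    for _ in range(queries_per_step):
        candidate = np.clip(x + rng.normal(0, step_sizes[0], size=x.shape), 0, 1)
        if query(candidate) != y_true:
            x_adv = candidate
            break
    if x_adv is None:
        return None  # attack failed within the query budget

    # 2. Fine phase: propose progressively smaller random steps that pull the
    #    adversarial point toward x while keeping the label flipped.
    for sigma in step_sizes:
        for _ in range(queries_per_step):
            proposal = x_adv + rng.normal(0, sigma, size=x.shape)
            proposal = np.clip(0.9 * proposal + 0.1 * x, 0, 1)
            closer = np.linalg.norm(proposal - x) < np.linalg.norm(x_adv - x)
            if closer and query(proposal) != y_true:
                x_adv = proposal  # accept: still adversarial, smaller L2
    return x_adv
```

In this sketch, each accepted proposal must both reduce the L2 distance to the original input and keep the hard-label decision flipped, so no gradients or confidence scores are ever used; gradient obfuscation in the defended model therefore has nothing to obscure.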