Adversarial Attacks On Coarse-To-Fine Classifiers
Ismail Alkhouri, George Atia
SPS
Length: 00:14:32
Adversarial attacks have exposed the vulnerability of one-stage classifiers to carefully crafted perturbations that drastically alter their predictions while remaining imperceptible. In this paper, we examine the susceptibility of coarse-to-fine hierarchical classifiers to such attacks. We formulate convex programs that generate perturbations targeting these models and propose a generic solution based on the Alternating Direction Method of Multipliers (ADMM). We evaluate the proposed attacks in terms of the degradation in classification accuracy and imperceptibility measures, in comparison to perturbations generated to fool one-stage classifiers.
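The paper's convex programs for coarse-to-fine classifiers are not reproduced on this page, but the ADMM machinery it relies on can be illustrated on a deliberately simplified stand-in: finding the minimum-norm perturbation that pushes a toy *linear* classifier across its decision boundary. Everything below (the linear model, the halfspace constraint, the function name `admm_min_perturbation`, and the parameter choices) is a hypothetical sketch for illustration, not the authors' formulation.

```python
import numpy as np

# Toy problem (hypothetical, not the paper's model):
#   minimize (1/2)||delta||^2   s.t.   w·(x + delta) + b <= 0
# ADMM splitting: f(delta) = (1/2)||delta||^2,
#                 g(z)     = indicator of the halfspace {z : w·(x+z)+b <= 0},
# with the consensus constraint delta = z.

def admm_min_perturbation(x, w, b, rho=1.0, iters=200):
    n = x.size
    delta = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(iters):
        # delta-update: closed form for (1/2)||d||^2 + (rho/2)||d - z + u||^2
        delta = rho * (z - u) / (1.0 + rho)
        # z-update: Euclidean projection of (delta + u) onto the halfspace
        v = delta + u
        violation = w @ (x + v) + b
        if violation > 0:
            v = v - (violation / (w @ w)) * w
        z = v
        # dual ascent on the consensus residual
        u = u + delta - z
    return z  # feasible iterate (lies in the halfspace by construction)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    w = rng.normal(size=5)
    b = 1.0 - w @ x  # arrange w·x + b = 1 > 0, so a perturbation is required
    delta = admm_min_perturbation(x, w, b)
    print(w @ (x + delta) + b)  # ≈ 0: the decision boundary is reached
```

For this quadratic objective with a halfspace constraint, the ADMM iterates converge linearly to the analytic minimum-norm solution, `-(w·x + b) / ||w||² · w`; the paper's contribution is posing analogous convex programs for hierarchical (coarse-to-fine) models, where no such closed form is available.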
Chairs:
David Luengo