SYNTHESIS OF ADVERSARIAL SAMPLES IN TWO-STAGE CLASSIFIERS
Ismail Alkhouri, George Atia, Alvaro Velasquez
SPS
Adversarial attacks can drastically reduce the accuracy and confidence of classifiers while remaining imperceptible. Existing studies on the topic have largely focused on one-stage classifiers. In this paper, we study the robustness of two two-stage hierarchical classifier models, the flat and top-down hierarchical classifiers (termed FHC and TDHC, respectively), to targeted and confidence-reduction attacks. We formulate feasibility programs based on similarity and distance measures for the one-shot synthesis of adversarial examples, and devise a generative approach to their solution. In this approach, the adjustable parameters of a generative network are iteratively updated by optimizing loss functions for the dual objective of (i) low attack perceptibility and (ii) small distance from the desired soft predictions. We demonstrate the performance of the proposed approach in terms of imperceptibility and measures of attack success, and show that it compares favorably with state-of-the-art techniques.
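The dual objective described above can be illustrated with a minimal sketch. Note this is a simplified stand-in, not the paper's method: instead of a generative network attacking a two-stage hierarchical classifier, it directly optimizes a perturbation `delta` against a toy linear-softmax classifier, balancing (i) a perceptibility penalty `||delta||^2` against (ii) the squared distance between the model's soft prediction and a desired target distribution. The function `synthesize_adversarial` and all parameter names are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits z.
    e = np.exp(z - z.max())
    return e / e.sum()

def synthesize_adversarial(x, W, target_probs, lam=0.1, lr=0.5, steps=300):
    """Optimize a perturbation delta for the dual objective:
    (i) low perceptibility: lam * ||delta||^2, and
    (ii) small distance to the desired soft prediction:
         ||softmax(W @ (x + delta)) - target_probs||^2.
    (Illustrative stand-in; the paper instead trains a generative network.)"""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(W @ (x + delta))
        # Softmax Jacobian: diag(p) - p p^T (symmetric).
        J = np.diag(p) - np.outer(p, p)
        # Chain rule: gradient of the prediction-distance term w.r.t. logits,
        # then pulled back through the linear layer W.
        grad_logits = 2.0 * J @ (p - target_probs)
        grad_delta = W.T @ grad_logits + 2.0 * lam * delta
        delta -= lr * grad_delta
    return delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    W = rng.normal(size=(3, 4))
    target = np.array([0.8, 0.1, 0.1])  # desired soft prediction
    delta = synthesize_adversarial(x, W, target)
    before = np.linalg.norm(softmax(W @ x) - target)
    after = np.linalg.norm(softmax(W @ (x + delta)) - target)
    print(f"distance to target: {before:.3f} -> {after:.3f}")
```

The regularization weight `lam` plays the role of trading off the two loss terms: increasing it yields a less perceptible but weaker attack, mirroring the perceptibility/success trade-off the abstract evaluates.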