21 Sep 2020

Adversarial perturbation attacks have been shown to inflict severe damage on simple one-stage classifiers. In this paper, we examine the vulnerability of Hierarchical Composite Classifiers to such attacks. We formulate a maximin program that generates perturbations attacking these models, and obtain an approximate solution based on a convex relaxation of the proposed program. With the proposed approach, the relative loss in classification accuracy for the super-labels decreases drastically compared with perturbations generated for One Stage Composite Classifiers. Additionally, we show that fooling a classifier about the 'big picture' is generally more perceptible.
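The abstract's exact maximin program and its convex relaxation are not reproduced on this page. As a hedged illustration of the general idea of a gradient-based perturbation targeting the super-label ("big picture") level, here is a minimal sketch using a toy linear hierarchical classifier and a one-step sign-gradient (FGSM-style) attack; the function name, the toy model, and all parameters are hypothetical and are not the paper's method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def superlabel_perturbation(x, y_super, W_super, b_super, eps=0.1):
    """One-step sign-gradient perturbation against the super-label head
    of a toy linear classifier (hypothetical stand-in for the paper's
    maximin attack). W_super: (num_super, dim), b_super: (num_super,).
    Returns x shifted by eps * sign(grad of cross-entropy wrt x)."""
    p = softmax(W_super @ x + b_super)
    # gradient of cross-entropy loss wrt the input: W^T (p - onehot(y))
    g = W_super.T @ (p - np.eye(len(b_super))[y_super])
    return x + eps * np.sign(g)

# Toy example: 2 super-labels, 4-dimensional input.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
b = np.zeros(2)
x = rng.normal(size=4)
y = 0
x_adv = superlabel_perturbation(x, y, W, b, eps=0.2)
```

Because the toy model's loss is convex in the input, this single sign step is guaranteed not to decrease the super-label loss, while keeping the perturbation within an L-infinity budget of `eps`.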
