08 Jul 2020

Recent advances in adversarial attack techniques have enabled successful attacks on high-quality CNN-based object detectors. However, existing adversarial attack algorithms for object detection mainly focus on disturbing the optimization objectives (i.e., the classification and regression losses), which is sub-optimal because it ignores contextual information. We propose a novel contextual adversarial perturbation (CAP) that attacks the contextual information instead, which is more effective at degrading the mAP and recall of object detectors. In particular, CAP does not rely on ground-truth information to generate adversarial examples and therefore generalizes better. Remarkably, with an additional contextual background loss that we design, the mAP and recall are degraded to almost 0.00%. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate the effectiveness of our attacks on both fully and weakly supervised object detectors.
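The abstract does not give the attack procedure itself, but the general idea of pushing a detector's region predictions toward the background class without using ground truth can be sketched as a standard PGD-style loop. The snippet below is only an illustrative interpretation, not the authors' CAP algorithm: the `detector` callable, its assumed output of per-region class logits with the background class in the last column, and the `eps`/`alpha`/`steps` settings are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def background_push_attack(detector, image, eps=8/255, alpha=2/255, steps=10):
    """PGD-style sketch of a "push everything to background" attack.

    Assumption (hypothetical interface): detector(image) returns per-region
    class logits of shape (num_regions, num_classes + 1), where the last
    column is the background class. No ground-truth boxes or labels are used;
    the loss depends only on the detector's own region predictions.
    """
    perturbed = image.clone().detach()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        logits = detector(perturbed)                     # (R, C+1) region logits
        log_probs = F.log_softmax(logits, dim=-1)
        # Loss is low when every region is confidently background,
        # so minimizing it suppresses all detections.
        loss = -log_probs[:, -1].mean()
        grad = torch.autograd.grad(loss, perturbed)[0]
        with torch.no_grad():
            perturbed = perturbed - alpha * grad.sign()  # descend to raise background prob
            perturbed = image + (perturbed - image).clamp(-eps, eps)
            perturbed = perturbed.clamp(0.0, 1.0).detach()
    return perturbed
```

Because the loss is computed purely from the detector's own outputs, the loop matches the abstract's claim that no ground-truth annotations are needed to craft the adversarial example.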
