Contextual Adversarial Attacks for Object Detection
Hantao Zhang, Wengang Zhou, Houqiang Li
Recent advances in adversarial attack techniques have enabled successful attacks on high-quality CNN-based object detectors. However, existing adversarial attack algorithms for object detection mainly focus on disturbing the optimization objectives (i.e., the classification and regression losses), which is sub-optimal because it ignores contextual information. We propose the contextual adversarial perturbation (CAP), which attacks contextual information and thereby degrades the mAP and recall of object detectors more effectively. Notably, CAP does not rely on ground-truth information to generate adversarial examples, which leads to stronger generalization ability. We further design a contextual background loss that degrades mAP and recall to almost 0.00%. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate the effectiveness of our attacks on both fully and weakly supervised object detectors.
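To make the ground-truth-free attack idea concrete, below is a minimal PGD-style sketch in PyTorch. It assumes the detector is a differentiable module that returns per-proposal class logits; the `background_index` target and the cross-entropy-toward-background surrogate are illustrative assumptions standing in for the paper's contextual background loss, not the authors' exact CAP formulation.

```python
# Sketch: iterative L_inf attack that pushes every proposal toward the
# background class. Detector interface and loss are assumptions; see lead-in.
import torch
import torch.nn.functional as F

def background_attack(detector, image, background_index=0,
                      eps=8 / 255, alpha=2 / 255, steps=10):
    """Suppress all detections by driving proposal scores toward
    'background', using no ground-truth boxes or labels."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Assumed interface: logits of shape (num_proposals, num_classes).
        scores = detector(image + delta)
        target = torch.full((scores.size(0),), background_index,
                            dtype=torch.long, device=scores.device)
        # Minimizing CE toward background suppresses every detection.
        loss = F.cross_entropy(scores, target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()           # descend on the loss
            delta.clamp_(-eps, eps)                      # L_inf budget
            delta.add_(image).clamp_(0, 1).sub_(image)   # keep pixels valid
        delta.grad.zero_()
    return (image + delta).detach()
```

Because the target is the background class rather than annotated labels, the loop needs no ground-truth supervision, mirroring the generalization property claimed for CAP.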