
INFORMATION DISTRIBUTION BASED DEFENSE AGAINST PHYSICAL ATTACKS ON OBJECT DETECTION

Guangzhi Zhou, Hongchao Gao, Peng Chen, Jin Liu, Jiao Dai, Jizhong Han, Ruixuan Li

10 Jul 2020

Recently, physical attacks have posed a new challenge to the security of deep neural networks (DNNs) by generating physical-world adversarial patches that attack DNN-based applications. The information distribution within an adversarial patch differs from that of real image patches. In this paper, we propose a general defense method to effectively prevent such attacks. The method consists of an entropy-based proposal component and a gradient-based filtering component. Each component can be viewed as a preprocessing step on adversarial images; the processed images are then run through unmodified detectors, making our method agnostic to both the detector and the attack. Moreover, because our method relies on traditional image processing rather than DNNs, it does not require large amounts of training data. Extensive experiments on different datasets indicate that our method effectively defends against physical attacks on object detection, increasing mAP from 31.3% to 53.8% on Pascal VOC 2007 and from 19.0% to 40.3% on Inria, and that it transfers well across different physical attacks.
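The abstract describes the pipeline only at a high level. The sketch below is a minimal, illustrative interpretation of the two stages: a sliding-window entropy check that proposes suspicious regions (adversarial patches tend to pack more information than natural patches), followed by a gradient-based filter that suppresses strongly textured pixels inside those proposals. All function names, window sizes, and thresholds here are hypothetical assumptions, not the paper's actual parameters or implementation.

    import numpy as np

    def patch_entropy(gray: np.ndarray, bins: int = 32) -> float:
        """Shannon entropy of a grayscale patch's intensity histogram."""
        hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def gradient_magnitude(gray: np.ndarray) -> np.ndarray:
        """Finite-difference gradient magnitude (a simple Sobel-like proxy)."""
        gy, gx = np.gradient(gray.astype(np.float32))
        return np.sqrt(gx ** 2 + gy ** 2)

    def defend(image: np.ndarray, win: int = 32,
               entropy_thresh: float = 4.5,
               grad_thresh: float = 40.0) -> np.ndarray:
        """Suppress candidate adversarial patches; returns a cleaned copy.

        Thresholds are illustrative; the paper's values are not given here.
        """
        gray = image.mean(axis=2) if image.ndim == 3 else image
        out = image.copy()
        for y in range(0, gray.shape[0] - win + 1, win):
            for x in range(0, gray.shape[1] - win + 1, win):
                patch = gray[y:y + win, x:x + win]
                # Stage 1: propose only windows with suspiciously high entropy.
                if patch_entropy(patch) < entropy_thresh:
                    continue
                # Stage 2: within proposals, neutralize high-gradient pixels.
                mask = gradient_magnitude(patch) > grad_thresh
                region = out[y:y + win, x:x + win]
                region[mask] = region.mean(axis=(0, 1))
        return out  # feed the cleaned image to the unmodified detector

Because the defense runs entirely before the detector, the detector itself needs no retraining or modification, which is what makes the approach detector- and attack-agnostic.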
