03 Oct 2022

Backdoor attacks pose a serious threat to deep neural networks. Many defenses against backdoor attacks have been proposed and significantly reduce the attack success rate. However, most existing defenses are empirical and may later be broken by stronger attacks. To avoid this cat-and-mouse game, we propose CRAB, a defense that guarantees the robustness of an image classifier against poisoning-based backdoor attacks whose triggers are bounded within a contiguous region. We analyze two ways of placing triggers: fixed-region and randomized-region. For the fixed-region setting, we train a set of models on the benign dataset, one per image-ablation position, and provide robustness guarantees for both the training and test datasets. For randomly positioned triggers, we train a single universal model on the triggered dataset and provide robustness guarantees for the test dataset. Our experiments demonstrate that CRAB is strongly robust against patch-based backdoor attacks while maintaining comparably high clean accuracy.
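The abstract does not spell out the ablation procedure, so the following is only a minimal sketch of how a region-ablation certificate of this general kind is commonly built (in the spirit of derandomized smoothing), not CRAB's actual method. The `classify` placeholder, the window size `keep`, the `stride`, and the `patch` bound are all illustrative assumptions.

```python
import math
import numpy as np

# Hypothetical stand-in for a trained classifier. In the fixed-region
# setting described above, a separate model would be trained per ablation
# position; a single placeholder keeps this sketch self-contained.
def classify(image: np.ndarray) -> int:
    return int(image.sum()) % 10  # placeholder decision rule, not a real model

def ablate(image: np.ndarray, x: int, y: int, keep: int) -> np.ndarray:
    """Zero out everything except a keep x keep window anchored at (x, y)."""
    out = np.zeros_like(image)
    out[y:y + keep, x:x + keep] = image[y:y + keep, x:x + keep]
    return out

def certified_predict(image: np.ndarray, keep: int = 8, stride: int = 8,
                      patch: int = 5, num_classes: int = 10):
    """Majority vote over ablated views, certified against any trigger
    confined to a single patch x patch region. All thresholds here are
    illustrative assumptions."""
    h, w = image.shape[:2]
    positions = [(x, y)
                 for y in range(0, h - keep + 1, stride)
                 for x in range(0, w - keep + 1, stride)]
    votes = np.zeros(num_classes, dtype=int)
    for x, y in positions:
        votes[classify(ablate(image, x, y, keep))] += 1
    order = np.argsort(votes)
    top, runner_up = order[-1], order[-2]
    # A trigger inside one patch x patch region can only affect ablations
    # whose retained window intersects it; along each axis at most
    # ceil((keep + patch - 1) / stride) window offsets can overlap it.
    max_flipped = math.ceil((keep + patch - 1) / stride) ** 2
    certified = votes[top] - votes[runner_up] > 2 * max_flipped
    return int(top), bool(certified)

image = np.random.rand(32, 32, 3)  # CIFAR-sized example input
label, is_certified = certified_predict(image)
print(f"prediction={label}, certified={is_certified}")
```

The certificate rests on the observation that a contiguous trigger can flip at most `max_flipped` of the ablated votes, so a majority margin larger than twice that bound cannot be overturned by any such trigger, regardless of its contents.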
