Poster 09 Oct 2023

In backdoor attacks, an attacker typically implants a backdoor into the target model during the training phase by manipulating a subset of training samples with backdoor triggers. The state-of-the-art approach is the sample-specific backdoor attack. Although it bypasses most existing backdoor defenses, it can still be detected by checking the mapping between samples and their labels. In this paper, we propose a new backdoor attack called the Clean Label Sample-Specific Backdoor Attack (CSSBA). We use advanced deep steganography to hide the trigger in source images from the training dataset, obtaining backdoor images. We then generate poisoned images that are close to the backdoor images in feature space without changing their original labels; these poisoned images are used to train the target model and implant the backdoor. Experiments on multiple datasets and models demonstrate that our method is effective while keeping the trigger invisible.
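The clean-label poisoning step described above can be sketched as an optimization: push a clean source image toward a backdoor image in feature space while staying close to the source image in pixel space, so its original label remains plausible. The abstract does not specify the exact objective, so the loss below (a feature-collision term plus a pixel-proximity term) and the frozen linear feature extractor `W` are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch of clean-label poisoning via feature-space matching:
#   min_x ||f(x) - f(x_b)||^2 + lam * ||x - x_s||^2
# where f is a frozen feature extractor, x_b the steganographic backdoor
# image, and x_s the clean source image whose label is kept unchanged.
# For a runnable toy, f is a fixed linear map W @ x (assumption), which
# gives a closed-form gradient.

rng = np.random.default_rng(0)
d, k = 64, 16                                   # toy image dim / feature dim
W = rng.standard_normal((k, d)) / np.sqrt(d)    # frozen "feature extractor"

def features(x):
    return W @ x

def poison(x_s, x_b, lam=0.1, lr=0.05, steps=500):
    """Gradient descent toward x_b's features while staying near x_s."""
    x = x_s.copy()
    for _ in range(steps):
        grad = 2 * W.T @ (W @ x - W @ x_b) + 2 * lam * (x - x_s)
        x -= lr * grad
    return x

x_s = rng.standard_normal(d)   # clean source image (label is kept)
x_b = rng.standard_normal(d)   # backdoor image (trigger hidden by steganography)

x_p = poison(x_s, x_b)
feat_gap = np.linalg.norm(features(x_p) - features(x_b))
pixel_gap = np.linalg.norm(x_p - x_s)
print(feat_gap, pixel_gap)
```

The regularizer weight `lam` trades off stealth in pixel space against closeness in feature space; in practice the feature extractor would be the target model's penultimate layer rather than a random linear map.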
