AN EMPIRICAL STUDY OF BACKDOOR ATTACKS ON MASKED AUTOENCODERS

Shuli Zhuang (University of Science and Technology of China); Pengfei Xia (University of Science and Technology of China); Bin Li (University of Science and Technology of China)

07 Jun 2023

Large-scale unlabeled data has spurred recent progress in self-supervised methods for learning rich visual representations. Masked autoencoders (MAE), a recently proposed self-supervised method, have exhibited exemplary performance on many vision tasks by masking and reconstructing random patches of the input. However, as a representation learning method, MAE's vulnerability to backdoor attacks, and the impact of such attacks on downstream tasks, have not been fully investigated. In this paper, we use several common triggers to perform backdoor attacks on the pre-training phase of MAE and evaluate them on downstream tasks, exploring key factors such as the trigger pattern and the number of poisoned samples. Several interesting findings emerge: the pre-training process of MAE can be exploited to strengthen the encoder's memorization of the trigger pattern; a global trigger attacks the encoder more easily than local triggers; and the blend ratio and patch size of the triggers have a strong impact on the attack's effectiveness against MAE.
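To make the attack setting above concrete, the sketch below illustrates one way a blended global trigger could be injected into a small fraction of unlabeled pre-training images before MAE pre-training. The function names, blend ratio, and poisoning rate are illustrative assumptions for exposition, not the authors' exact configuration.

```python
import numpy as np

def apply_blended_trigger(image, trigger, blend_ratio=0.1):
    """Blend a global trigger pattern into an image.

    image, trigger: float arrays in [0, 1] with the same HxWxC shape.
    blend_ratio: weight of the trigger in the blended result (assumed knob).
    """
    return (1.0 - blend_ratio) * image + blend_ratio * trigger

def poison_pretraining_set(images, trigger, poison_rate=0.01,
                           blend_ratio=0.1, seed=0):
    """Inject the trigger into a random subset of unlabeled pre-training images.

    Returns the (partially) poisoned copy of the dataset and the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        poisoned[i] = apply_blended_trigger(images[i], trigger, blend_ratio)
    return poisoned, idx
```

In this sketch, the MAE encoder would then be pre-trained on the poisoned set as usual; varying `blend_ratio`, the trigger's patch size, and `poison_rate` corresponds to the factors the study examines.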
