PROMPT MAKES MASK LANGUAGE MODELS BETTER ADVERSARIAL ATTACKERS

He Zhu (Institute of Information Engineering, Chinese Academy of Sciences); Ce Li (Institute of Information Engineering, Chinese Academy of Sciences); Haitian Yang (Institute of Information Engineering, Chinese Academy of Sciences); Yan Wang (Institute of Information Engineering, Chinese Academy of Sciences); Weiqing Huang (Institute of Information Engineering, Chinese Academy of Sciences)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
06 Jun 2023

Generating high-quality synonymous perturbations is a core challenge for textual adversarial tasks. However, candidates generated by a masked language model often include antonyms or irrelevant words, which limits the perturbation space and weakens the attack's effectiveness. We present ProAttacker, which uses Prompts to make masked language models better adversarial Attackers. ProAttacker inverts the prompt paradigm: it prepends a prompt carrying the class label to guide the language model toward more semantically consistent perturbations. We present a systematic evaluation of attack performance on 6 NLP datasets, covering text classification and inference. Our experiments demonstrate that ProAttacker outperforms state-of-the-art attack strategies in both attack success rate and perturbation rate.
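The prompt-inversion idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt template, the `fill_mask` stand-in, and all function names are assumptions for demonstration, since the abstract does not specify the exact prompt format or model interface.

```python
# Sketch of prompt-guided perturbation generation. The template string
# below is hypothetical; the paper's exact prompt is not given in this
# abstract.

def build_prompted_input(text: str, label: str, mask_index: int,
                         mask_token: str = "[MASK]") -> str:
    """Prepend a label-bearing prompt so the masked LM is conditioned
    on the class label when proposing substitutes for one word."""
    tokens = text.split()
    tokens[mask_index] = mask_token
    # Hypothetical template: steer generation toward label-consistent words.
    return f"This text is about {label}. " + " ".join(tokens)

def propose_candidates(prompted: str, fill_mask) -> list[str]:
    """`fill_mask` stands in for a masked-LM fill-in call (e.g. a
    HuggingFace fill-mask pipeline); it returns candidate substitute words
    ranked by the model. Label-conditioning happens via the prompt text."""
    return fill_mask(prompted)

# Toy stand-in for a masked language model, for demonstration only.
def toy_fill_mask(prompted: str) -> list[str]:
    return ["excellent", "great", "fine"]

prompted = build_prompted_input("the movie was good", "positive", 3)
candidates = propose_candidates(prompted, toy_fill_mask)
```

In a real attack, the candidates returned for each masked position would replace the original word, and the perturbed sentence would be tested against the victim classifier; the label-bearing prompt is what biases the model away from antonyms and off-topic words.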
