Measuring the Transferability of L-infty Attacks by the L-2 Norm

Sizhe Chen (Shanghai Jiao Tong University); Qinghua Tao (KU Leuven); Zhixing Ye (Shanghai Jiao Tong University); Xiaolin Huang (Shanghai Jiao Tong University)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
09 Jun 2023

Deep neural networks can be fooled by adversarial examples that differ only trivially from the original samples. To keep the difference imperceptible to human eyes, researchers bound the adversarial perturbations by the L-infty norm, which now commonly serves as the standard for aligning the strength of different attacks for a fair comparison. However, we propose that the L-infty norm alone is not sufficient to measure attack strength, because even with a fixed L-infty distance, the L-2 distance also greatly affects the attack transferability between models. This discovery leads to a more in-depth understanding of the attack mechanism. Since larger perturbations naturally lead to better transferability, we advocate that the strength of attacks should be measured simultaneously by both the L-infty and the L-2 norm. Our proposal is firmly supported by extensive experiments on the ImageNet dataset covering 7 attacks, 4 white-box models, and 9 black-box models.
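The central observation can be illustrated numerically: two perturbations with identical L-infty norms can have very different L-2 norms. Below is a minimal sketch (not from the paper) comparing a sparse perturbation, which modifies only a small patch at the budget, with a dense sign-based perturbation, which pushes every pixel to the budget, as FGSM-style attacks tend to do; the shapes and the 8/255 budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 8 / 255  # a common L-infty budget for ImageNet attacks (assumed here)

# Illustrative image shape (3-channel 224x224, typical for ImageNet models).
shape = (3, 224, 224)

# Sparse perturbation: only a 10x10 patch of one channel reaches the budget.
sparse = np.zeros(shape)
sparse[0, :10, :10] = eps

# Dense perturbation: every pixel is pushed to +/-eps, as sign-based
# gradient attacks produce.
dense = eps * np.sign(rng.standard_normal(shape))

for name, d in [("sparse", sparse), ("dense", dense)]:
    print(f"{name}: L-inf = {np.abs(d).max():.4f}, L-2 = {np.linalg.norm(d):.2f}")
```

Both perturbations satisfy the same L-infty constraint, yet the dense one has a far larger L-2 norm, which is exactly the kind of disparity the paper argues affects transferability and should therefore be reported alongside the L-infty budget.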
