
On Adversarial Robustness of Audio Classifiers

Kangkang Lu (A*STAR); Cuong Nguyen (Institute for Infocomm Research, A*STAR); Xun Xu (Institute for Infocomm Research, A*STAR); Chuan Sheng Foo (Institute for Infocomm Research, A*STAR)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
07 Jun 2023

We make three contributions toward improving the adversarial robustness of audio classifiers. First, whereas most existing work focuses on ℓp-norm-bounded adversarial perturbations, we consider signal-to-noise ratio (SNR) as a more natural measure of perturbation strength for audio data. We show that perturbed examples at a given SNR can be generated with a corresponding ℓ2-norm perturbation, establishing the equivalence of these two metrics for assessing adversarial perturbations. This connection enables direct control over the SNR of perturbed examples and allows comparisons across perturbations under different ℓp-norm constraints. Second, we are among the first to apply the APGD attack to adversarial training on audio data; in our experiments, APGD adversarial training is robust to adversarial attacks without compromising clean accuracy. Finally, we further improve adversarial robustness by adapting CutMix to audio (cutting and mixing two audio clips together) in conjunction with adversarial training, and observe improvements in robustness on UrbanSound8K (US8K).
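The SNR-to-ℓ2 correspondence the abstract describes follows from the standard definition SNR(dB) = 20·log₁₀(‖x‖₂ / ‖δ‖₂): fixing a target SNR for a given signal directly fixes the ℓ2 budget of the perturbation. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def l2_budget_from_snr(x, snr_db):
    """Return the l2-norm perturbation budget that yields the target SNR (in dB)
    for signal x, using ||delta||_2 = ||x||_2 * 10**(-SNR/20)."""
    return np.linalg.norm(x) * 10 ** (-snr_db / 20)

# Example: a unit-norm waveform at 20 dB SNR admits an l2 budget of 0.1.
x = np.random.randn(16000)
x /= np.linalg.norm(x)
eps = l2_budget_from_snr(x, 20.0)  # -> 0.1
```

An ℓ2-constrained attack such as APGD can then be run with this `eps`, so reported SNR levels and ℓ2 budgets are interchangeable.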
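The CutMix adaptation can be sketched for 1-D waveforms: replace a random time segment of one clip with the corresponding segment of another, and mix the labels in proportion to the segment lengths. This is a hypothetical illustration under standard CutMix conventions, not necessarily the authors' exact recipe:

```python
import numpy as np

def audio_cutmix(x1, y1, x2, y2, rng=None):
    """Cut a random time segment from clip x2 into clip x1 (equal lengths),
    mixing one-hot labels by the fraction of samples kept from each clip.
    Hypothetical audio analogue of CutMix, not the paper's exact procedure."""
    rng = rng or np.random.default_rng()
    n = len(x1)
    lam = rng.beta(1.0, 1.0)              # mixing ratio, as in CutMix
    cut = int(n * (1 - lam))              # number of samples taken from x2
    start = rng.integers(0, n - cut + 1)  # random segment position
    mixed = x1.copy()
    mixed[start:start + cut] = x2[start:start + cut]
    y = lam * y1 + (1 - lam) * y2         # label mix matches sample fractions
    return mixed, y
```

During adversarial training, the attack would then target the mixed clip with the mixed (soft) label.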
