Adversarial Defense For Automatic Speaker Verification By Cascaded Self-Supervised Learning Models
Haibin Wu, Xu Li, Andy Liu, Zhiyong Wu, Helen Meng, Hung-yi Lee
SPS
Length: 00:12:02
Automatic speaker verification (ASV) is one of the core technologies in biometric identification. With the ubiquitous deployment of ASV systems in safety-critical applications, more and more malicious attackers attempt to launch adversarial attacks against them. Amid this arms race between attack and defense in ASV, how to effectively improve the robustness of ASV against adversarial attacks remains an open question. We note that self-supervised learning models, after pretraining, possess the ability to mitigate superficial perturbations in their input. Hence, with the goal of effective defense for ASV against adversarial attacks, we propose a standard and attack-agnostic method based on cascaded self-supervised learning models that purifies the adversarial perturbations. Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks in scenarios where attackers may either be aware or unaware of the self-supervised learning models.
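The purification idea in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: a simple moving-average filter stands in for each pretrained self-supervised reconstruction model, several such "purifiers" are cascaded, and a cosine score stands in for the ASV back-end. The names `purify`, `cascaded_purify`, and `cosine_score` are assumptions made for this sketch.

```python
import numpy as np

def purify(wave, kernel=3):
    """One purification pass. A smoothing filter is used here as a
    stand-in for a pretrained self-supervised reconstruction model
    (an assumption for illustration only)."""
    pad = kernel // 2
    padded = np.pad(wave, pad, mode="edge")
    return np.convolve(padded, np.ones(kernel) / kernel, mode="valid")

def cascaded_purify(wave, n_models=3):
    """Pass the input through n purification models in sequence,
    mirroring the cascaded structure described in the abstract."""
    for _ in range(n_models):
        wave = purify(wave)
    return wave

def cosine_score(a, b):
    """Toy ASV scoring: cosine similarity between two signals."""
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 400))       # enrollment "utterance"
adversarial = clean + 0.3 * rng.standard_normal(400)  # perturbed test input

raw_score = cosine_score(clean, adversarial)
purified_score = cosine_score(clean, cascaded_purify(adversarial))
```

Because the smoothing cascade attenuates the high-frequency perturbation far more than the slowly varying signal, the purified input scores closer to the clean enrollment than the raw adversarial input does; the real method relies on self-supervised models learning an analogous reconstruction during pretraining.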
Chairs:
Takafumi Koshinaka