Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
Jhih-Cing Huang (National Taiwan University); Yu-Lin Tsai (National Yang Ming Chiao Tung University); Chao-Han Huck Yang (Georgia Institute of Technology ); Cheng-Fang Su (National Yang Ming Chiao Tung University); Chia-Mu Yu (National Yang Ming Chiao Tung University); Pin-Yu Chen (IBM Research); Sy-Yen Kuo (National Taiwan University)
SPS
Quantum classifiers are known to be vulnerable to adversarial attacks, in which imperceptible perturbations fool them into misclassification. In this paper, we discover that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We further draw a connection to differential privacy and demonstrate that a quantum classifier trained in the natural presence of such noise is differentially private. Lastly, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples.
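To make the idea concrete, below is a minimal, hypothetical sketch of the general mechanism the abstract describes: hardening a classifier by averaging its decision over random rotation noise, in the spirit of randomized smoothing. This is not the paper's construction; the toy single-qubit "classifier", the Gaussian noise scale `sigma`, and all function names are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Real-valued rotation about the Y axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(state):
    """Toy classifier: label 0 if |<0|psi>|^2 >= 0.5, else 1 (Z-basis measurement)."""
    p0 = abs(state[0]) ** 2
    return 0 if p0 >= 0.5 else 1

def smoothed_classify(state, sigma=0.3, n_samples=2000, rng=None):
    """Majority vote over random Ry-rotation noise (illustrative, not the
    paper's exact channel); returns (label, empirical vote share)."""
    rng = rng or np.random.default_rng(0)
    votes = np.zeros(2)
    for _ in range(n_samples):
        noisy = ry(rng.normal(0.0, sigma)) @ state  # random rotation noise
        votes[classify(noisy)] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples

# A state close to |0>: the noise-smoothed classifier should still
# output label 0 with a large vote margin, which is the quantity a
# certified-robustness bound would be derived from.
psi = ry(0.2) @ np.array([1.0, 0.0])
label, margin = smoothed_classify(psi)
print(label, round(margin, 2))
```

In randomized-smoothing-style analyses, the size of the vote margin translates into a certified radius of input perturbations under which the predicted label provably cannot change; the paper derives the quantum analogue of such a bound via differential privacy.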