Revealing Perceptible Backdoors in DNNs, Without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David Miller, Hang Wang, George Kesidis
SPS
Length: 12:42
Recently, a backdoor data poisoning attack was proposed, which adds to the training set mislabeled examples containing an embedded backdoor pattern, aiming to have the classifier learn to decide to a target class whenever the backdoor pattern is present in a test sample. We address post-training detection of innocuous, perceptible backdoors in DNN image classifiers, wherein the defender does not have access to the poisoned training set. This problem is challenging because, without the poisoned training set, there is no hint about the actual backdoor pattern used during training. We identify two properties of perceptible backdoor patterns, spatial invariance and robustness, based upon which we propose a novel detector using the maximum achievable misclassification fraction (MAMF) statistic. Our detector decides whether the trained DNN has been backdoor-attacked and infers the source and target classes of the attack. Experimentally, our detector outperforms existing detectors.
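To make the MAMF idea concrete, here is a minimal, hypothetical sketch: for a candidate (source, target) class pair and a candidate backdoor region, search for a pattern that, when embedded into held-out source-class images, maximizes the fraction classified to the target class; a high maximum suggests an attack. All names, the toy classifier, the random pattern search (the paper would use a gradient-based estimation), and any decision threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(images, pattern, mask):
    """Overlay a perceptible pattern onto each image where mask == 1."""
    return images * (1 - mask) + pattern * mask

def misclassification_fraction(predict, images, pattern, mask, target):
    """Fraction of source-class images decided to `target` after embedding."""
    preds = predict(embed(images, pattern, mask))
    return float(np.mean(preds == target))

def mamf(predict, images, mask, target, n_trials=200):
    """Estimate the maximum achievable misclassification fraction by random
    search over constant-intensity candidate patterns (illustrative only)."""
    best = 0.0
    for _ in range(n_trials):
        pattern = np.full(images.shape[1:], rng.random())
        best = max(best, misclassification_fraction(
            predict, images, pattern, mask, target))
    return best

# Toy stand-in "classifier": class 1 iff mean pixel intensity > 0.5.
def predict(x):
    return (x.reshape(len(x), -1).mean(axis=1) > 0.5).astype(int)

images = rng.random((50, 8, 8)) * 0.4        # held-out source-class images (class 0)
mask = np.zeros((8, 8))
mask[:4, :] = 1.0                            # candidate backdoor region

score = mamf(predict, images, mask, target=1)
print(score)  # near 1.0 here: almost every image can be flipped to the target
```

In a full detection procedure, this score would be computed for every (source, target) pair; the DNN is flagged as attacked if the largest MAMF exceeds a threshold, and the maximizing pair gives the inferred source and target classes.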