Enhancing Adversarial Robustness For Image Classification By Regularizing Class Level Feature Distribution
Cheng Yu, Youze Xue, Jiansheng Chen, Yu Wang, Huimin Ma
Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. Adversarial training is, in practice, the most effective approach for improving the robustness of DNNs against adversarial examples. However, conventional adversarial training methods focus only on the classification results or on instance-level relationships in the feature representations of adversarial examples. Inspired by the observation that adversarial examples break the distinguishability of DNN feature representations across classes, we propose Intra and Inter Class Feature Regularization (I2FR) to make the feature distribution of adversarial examples preserve the same classification properties as that of clean examples. On the one hand, the intra-class regularization restricts the distance between the features of adversarial examples and those of both the corresponding clean data and other samples of the same class. On the other hand, the inter-class regularization prevents the features of adversarial examples from approaching those of other classes. By applying I2FR in both the adversarial example generation and the model training steps of adversarial training, we obtain stronger and more diverse adversarial examples, and the network learns a more distinguishable and reasonable feature distribution. Experiments on various adversarial training frameworks demonstrate that I2FR adapts to multiple training frameworks and outperforms state-of-the-art methods in the classification of both clean data and adversarial examples.
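Since the paper's equations are not reproduced on this page, the following PyTorch sketch illustrates one plausible form such intra- and inter-class regularizers could take. The function name `i2fr_loss`, the batch-wise distance construction, the margin hinge for the inter-class term, and the weighting coefficients are all assumptions for illustration, not the authors' actual formulation.

```python
import torch
import torch.nn.functional as F

def i2fr_loss(feat_adv, feat_clean, labels,
              margin=1.0, lambda_intra=1.0, lambda_inter=1.0):
    """Hypothetical sketch of an intra/inter class feature regularizer.

    feat_adv:   (B, D) features of adversarial examples
    feat_clean: (B, D) features of the corresponding clean examples
    labels:     (B,)   ground-truth class labels
    """
    # Pairwise distances between every adversarial feature and every
    # clean feature in the batch, plus a class-agreement mask.
    dist = torch.cdist(feat_adv, feat_clean)                    # (B, B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)           # (B, B)

    # Intra-class term: keep each adversarial feature close to (a) its own
    # clean counterpart and (b) clean features of other same-class samples.
    d_pair = F.pairwise_distance(feat_adv, feat_clean)          # (B,)
    intra = d_pair.mean() + (dist * same).sum() / same.sum().clamp(min=1)

    # Inter-class term: hinge that pushes adversarial features away from
    # clean features of *different* classes, up to the chosen margin.
    diff = ~same
    inter = (F.relu(margin - dist) * diff).sum() / diff.sum().clamp(min=1)

    return lambda_intra * intra + lambda_inter * inter
```

In an adversarial training loop, a regularizer of this shape could be added to the cross-entropy objective in both phases the abstract describes: maximized (alongside the classification loss) when crafting perturbations, and minimized when updating the network, so that adversarial features stay near their own class cluster and away from the others.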