
Intelligent And Adaptive Mixup Technique For Adversarial Robustness

Akshay Agarwal, Mayank Vatsa, Richa Singh, Nalini Ratha

Length: 00:12:40
21 Sep 2021

Deep neural networks are typically trained on large amounts of data to achieve state-of-the-art accuracy in a wide range of applications, from object recognition in computer vision and image analysis to natural language processing. It has also been shown that these networks can memorize their training data, and that this memorized information can be extracted from network parameters such as weights and gradient information. The adversarial vulnerability of deep networks is usually evaluated on the unseen test set of a database. If a network memorizes its training data, then small perturbations added to the training images should not drastically change its performance. Based on this assumption, we first evaluate the robustness of deep neural networks to small perturbations added to the training images used for learning the network parameters. We observe that, even though the network has seen these images, it remains vulnerable to such small perturbations. We then propose a novel data augmentation technique to increase the robustness of deep neural networks to these perturbations.
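The abstract does not detail the proposed intelligent and adaptive mixup scheme, but it builds on the standard mixup augmentation of Zhang et al. (2018), in which pairs of training images and their labels are convexly combined with a coefficient drawn from a Beta distribution. The sketch below shows only this vanilla mixup baseline as a point of reference; the function and parameter names are illustrative and are not taken from the paper.

import numpy as np

def mixup_batch(images, labels, num_classes, alpha=0.2, rng=None):
    """Return a mixup-augmented batch (vanilla mixup, not the paper's adaptive variant).

    images: float array of shape (batch, H, W, C)
    labels: int array of shape (batch,) with class indices
    alpha:  Beta-distribution parameter controlling interpolation strength
    """
    rng = rng or np.random.default_rng()
    batch = images.shape[0]

    # Sample one interpolation coefficient per example from Beta(alpha, alpha).
    lam = rng.beta(alpha, alpha, size=batch)

    # Randomly pair each example with another example in the batch.
    perm = rng.permutation(batch)

    # Convex combination of the paired images (broadcast lam over spatial dims).
    lam_img = lam.reshape(batch, 1, 1, 1)
    mixed_images = lam_img * images + (1.0 - lam_img) * images[perm]

    # Mix the one-hot labels with the same coefficients.
    one_hot = np.eye(num_classes)[labels]
    lam_lbl = lam.reshape(batch, 1)
    mixed_labels = lam_lbl * one_hot + (1.0 - lam_lbl) * one_hot[perm]

    return mixed_images, mixed_labels

if __name__ == "__main__":
    # Toy usage example on a random batch of 8 images with 10 classes.
    x = np.random.rand(8, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=8)
    xm, ym = mixup_batch(x, y, num_classes=10)
    print(xm.shape, ym.shape)  # (8, 32, 32, 3) (8, 10)

In vanilla mixup the coefficient is sampled independently of the data; the paper's adaptive variant presumably selects or adjusts the mixing in a data-dependent way, but those details are not given in this abstract.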
