INVESTIGATING ROBUSTNESS OF BIOLOGICAL VS. BACKPROP BASED LEARNING

Yanpeng Zhou, Maosen Wang, Ponnuthurai Nagaratnam Suganthan, Manas Gupta, Arulmurugan Ambikapathi, Ramasamy Savitha

Length: 00:05:59
09 May 2022

Robustness of learning algorithms remains an important open problem, both from the perspective of adversarial attacks and for improving generalization. In this work, we investigate the robustness of the biologically inspired Hebbian learning algorithm in depth. We find that Hebbian learning based algorithms outperform conventional backpropagation-trained CNNs by a large margin of up to 18% on the CIFAR-10 dataset under the addition of noise. We highlight that an important reason for this is the underlying representations learnt by the algorithms: the Hebbian method learns the most robust representations among the methods compared, which helps it generalize better. We also conduct ablations on the Hebbian network and show that robustness drops by up to 16% on CIFAR-10 if the representation capacity of the network is degraded. Hence, the representations learnt play an important role in the resultant robustness of the models. We conduct experiments on multiple datasets and show that the results hold across all of them and at various noise levels.
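To illustrate the kind of local, unsupervised weight update that distinguishes Hebbian learning from backpropagation, here is a minimal sketch using Oja's rule for a single linear neuron. This is a generic textbook form, not necessarily the exact update used in the paper; the function name and learning rate are illustrative assumptions.

```python
# Minimal sketch of a Hebbian weight update (Oja's rule) for one linear
# neuron. Illustrative only -- not the paper's exact training procedure.

def oja_update(w, x, lr=0.1):
    """One Oja's-rule step: dw = lr * y * (x - y * w), with y = w . x.

    The -y^2 * w term keeps the weight vector bounded (norm -> 1),
    unlike the plain Hebbian rule dw = lr * y * x, which diverges.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Repeatedly presenting one input direction drives w toward a unit
# vector aligned with that direction (its principal component).
w = [0.5, 0.5]
x = [1.0, 0.0]
for _ in range(200):
    w = oja_update(w, x)
norm = sum(wi * wi for wi in w) ** 0.5
```

Because the update depends only on the local pre- and post-synaptic activity (`x` and `y`), no error signal is propagated backward through the network; this locality is a key structural difference from the CNN baselines discussed above.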
