Training Deep Spiking Neural Networks For Energy-Efficient Neuromorphic Computing

Gopalakrishnan Srinivasan, Chankyu Lee, Abhronil Sengupta, Priyadarshini Panda, Syed Shakib Sarwar, Kaushik Roy

04 May 2020

Spiking Neural Networks (SNNs) encode input information temporally using sparse spiking events, which can be harnessed to achieve higher computational efficiency. However, considering the rapid strides in accuracy enabled by Analog Neural Networks (ANNs), SNN training algorithms are much less mature. We propose SNN training methodologies with varying degrees of bio-fidelity and evaluate their efficacy on complex datasets. First, we present bio-plausible stochastic algorithms based on Spike Timing Dependent Plasticity (STDP) for unsupervised learning in SNNs. Our analysis on CIFAR-10 indicates that STDP-based learning rules enable the convolutional layers to self-learn low-level features using fewer training examples. However, STDP-based learning is limited to shallow SNNs, which yield considerably lower than state-of-the-art accuracy. To scale SNNs deeper, we propose a conversion methodology that maps an off-the-shelf trained ANN to an SNN. We demonstrate 69.96% accuracy for VGG16-SNN on ImageNet. However, ANN-to-SNN conversion incurs high inference latency to achieve the best accuracy. To minimize the inference latency, we propose a spike-based error backpropagation algorithm that uses a differentiable approximation of the spiking neuron. Our experiments on CIFAR-10 show that spike-based error backpropagation effectively captures temporal statistics, reducing the inference latency by up to 8× compared to converted SNNs while yielding comparable accuracy.
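
To make the STDP-based stage concrete, the sketch below shows a plain pair-based STDP update with exponential spike traces: a synapse is potentiated when a postsynaptic spike closely follows a presynaptic one and depressed in the reverse order. This is only an illustration of the timing dependence, not the stochastic STDP rule presented in the talk; the function name stdp_update and all constants (a_plus, a_minus, tau_pre, tau_post) are placeholders.

import numpy as np

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau_pre=20.0, tau_post=20.0, dt=1.0, w_min=0.0, w_max=1.0):
    # Pair-based STDP on a weight matrix w of shape (n_post, n_pre).
    # pre_spikes: (T, n_pre) binary array, post_spikes: (T, n_post) binary array.
    pre_trace = np.zeros(w.shape[1])   # decaying memory of presynaptic spikes
    post_trace = np.zeros(w.shape[0])  # decaying memory of postsynaptic spikes
    for t in range(pre_spikes.shape[0]):
        pre_trace = pre_trace * np.exp(-dt / tau_pre) + pre_spikes[t]
        post_trace = post_trace * np.exp(-dt / tau_post) + post_spikes[t]
        # Potentiation: postsynaptic spike following recent presynaptic activity.
        w = w + a_plus * np.outer(post_spikes[t], pre_trace)
        # Depression: presynaptic spike following recent postsynaptic activity.
        w = w - a_minus * np.outer(post_trace, pre_spikes[t])
    return np.clip(w, w_min, w_max)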
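For the ANN-to-SNN conversion stage, a common ingredient is rescaling the trained ANN weights by the maximum activations observed on calibration data, so that integrate-and-fire neurons with a unit threshold neither saturate nor stay silent. The sketch below illustrates this data-based weight normalization for a fully connected ReLU network; it is not the exact threshold-balancing procedure used for the VGG16-SNN result, and normalize_weights and calib_data are hypothetical names.

import numpy as np

def normalize_weights(weights, biases, calib_data):
    # Rescale each layer of a trained ReLU network so the converted
    # integrate-and-fire SNN can use a unit firing threshold everywhere.
    # weights[i]: (n_out, n_in) matrix, biases[i]: (n_out,) vector.
    # calib_data: (n_samples, n_inputs) inputs used to estimate activations.
    norm_w, norm_b = [], []
    prev_max = 1.0                          # inputs assumed scaled to [0, 1]
    x = calib_data
    for w, b in zip(weights, biases):
        x = np.maximum(x @ w.T + b, 0.0)    # ReLU activations with original weights
        layer_max = max(x.max(), 1e-9)      # largest activation seen in this layer
        norm_w.append(w * prev_max / layer_max)
        norm_b.append(b / layer_max)
        prev_max = layer_max
    return norm_w, norm_b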
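The spike-based error backpropagation stage hinges on replacing the non-differentiable firing function with a differentiable approximation in the backward pass. Below is a minimal PyTorch sketch, assuming a leaky integrate-and-fire neuron with a triangular surrogate derivative centered on the firing threshold; the specific approximation, leak factor, and reset scheme used in the talk may differ.

import torch

class SpikeFn(torch.autograd.Function):
    # Heaviside step in the forward pass, surrogate derivative in the backward pass.
    @staticmethod
    def forward(ctx, mem, threshold=1.0):
        ctx.save_for_backward(mem)
        ctx.threshold = threshold
        return (mem >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        # Triangular surrogate: gradient flows only when the membrane
        # potential is close to the firing threshold.
        surrogate = torch.clamp(1.0 - torch.abs(mem - ctx.threshold), min=0.0)
        return grad_output * surrogate, None

def lif_layer(inputs, weight, leak=0.95, threshold=1.0):
    # Unroll a leaky integrate-and-fire layer over time.
    # inputs: (T, batch, n_in) spike tensor, weight: (n_out, n_in).
    mem = torch.zeros(inputs.shape[1], weight.shape[0])
    outputs = []
    for t in range(inputs.shape[0]):
        mem = leak * mem + inputs[t] @ weight.t()
        spikes = SpikeFn.apply(mem, threshold)
        mem = mem - spikes * threshold        # soft reset after each spike
        outputs.append(spikes)
    return torch.stack(outputs)               # (T, batch, n_out) spike train

Because the forward pass emits genuine binary spikes while the backward pass uses the smooth surrogate, gradients can propagate through the unrolled time steps, which is what lets such training capture temporal statistics and operate at far fewer inference time steps than a converted SNN.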