  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:58
11 May 2022

Quantization is an important approach to making implementations hardware-friendly. However, while various quantization techniques have been extensively explored in deep learning to reduce the memory and computational footprint of models, similar investigations are scarce in neuromorphic computing, which is expected to offer high power and memory efficiency over its more traditional counterpart. In this work, we explore quantization-aware training (QAT) for SNNs as well as fully quantized transfer learning, using the DECOLLE learning algorithm as the base system; its local-loss-based learning is biologically plausible, avoids complex backpropagation through time, and is potentially hardware-friendly. We also evaluate different rounding functions and analyze their effects on learning. We validate our results on two datasets, DVS-Gestures and N-MNIST, where we reach within 0.3% of full-precision accuracy on both datasets using only 3-bit weights with a convolutional neural network. We are currently exploring other datasets to understand the generalizability of the explored quantization schemes.
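To illustrate the kind of rounding step QAT relies on, below is a minimal PyTorch sketch of fake-quantizing weights to 3 bits with a straight-through estimator, comparing deterministic and stochastic rounding. The uniform symmetric quantizer, the per-tensor max-abs scale, and the function names are assumptions made for illustration; they do not reproduce the paper's exact scheme or the DECOLLE implementation.

    import torch

    def stochastic_round(x):
        # Round each value up with probability equal to its fractional part.
        return torch.floor(x + torch.rand_like(x))

    def quantize_weights(w, n_bits=3, rounding=torch.round):
        # Uniform symmetric fake-quantization of a weight tensor to n_bits,
        # with a straight-through estimator so gradients pass through unchanged.
        # Per-tensor max-abs scaling is an illustrative assumption.
        qmax = 2 ** (n_bits - 1) - 1              # largest magnitude level for signed n_bits weights
        scale = w.detach().abs().max() / qmax     # per-tensor max-abs scale
        w_q = torch.clamp(rounding(w / scale), -qmax, qmax) * scale
        # Forward pass uses the quantized weights; backward sees the identity.
        return w + (w_q - w).detach()

    # Example: fake-quantize a convolutional layer's weights during training,
    # once with deterministic rounding and once with stochastic rounding.
    conv = torch.nn.Conv2d(2, 16, kernel_size=3)
    w_det = quantize_weights(conv.weight, n_bits=3, rounding=torch.round)
    w_sto = quantize_weights(conv.weight, n_bits=3, rounding=stochastic_round)

Swapping the rounding argument is one simple way to compare rounding functions under otherwise identical quantization settings, which is the kind of comparison described above.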
