
Speech Recognition Model Compression

Madhumitha Sakthi, Ahmed Tewfik, Raj Pawate

Length: 12:31
04 May 2020

Deep Neural Network-based speech recognition systems are widely used in most speech processing applications. To achieve better robustness and accuracy, these networks are constructed with millions of parameters, making them storage- and compute-intensive. In this paper, we propose Bin & Quant (B&Q), a compression technique with which we reduced the Deep Speech 2 speech recognition model size by a factor of 7 with negligible loss in accuracy. We show that the algorithm is generally beneficial, demonstrating its effectiveness on two other speech recognition models and on the VGG16 model. We also show empirically that Recurrent Neural Networks (RNNs) are more sensitive to model parameter perturbation than Convolutional Neural Networks (CNNs), which are in turn more sensitive than fully connected (FC) networks. Using our B&Q technique, we show that parameter sharing can be established across layers instead of only within a particular layer.
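The abstract does not spell out the exact binning rule, so the following is only a minimal sketch of the general bin-and-quantize idea it describes: pool weights from several layers into a shared set of bins, replace each weight with its bin centroid, and store per-weight bin indices plus one small codebook. The function name `bin_and_quantize`, the `num_bins` parameter, and the equal-width binning rule are illustrative assumptions, not the paper's actual B&Q algorithm.

```python
import numpy as np

def bin_and_quantize(layers, num_bins=256):
    """Illustrative sketch: cluster weights from several layers into shared
    bins and replace each weight with its bin centroid, so the model stores
    small integer indices plus one codebook shared across layers."""
    # Pool weights from every layer so bins are shared across layers,
    # not just within a single layer.
    flat = np.concatenate([w.ravel() for w in layers])
    # Equal-width bins over the pooled weight range (an assumed rule;
    # the paper may choose bin boundaries differently).
    edges = np.linspace(flat.min(), flat.max(), num_bins + 1)
    idx = np.clip(np.digitize(flat, edges) - 1, 0, num_bins - 1)
    # The centroid of each bin becomes the shared quantized value.
    centroids = np.array([
        flat[idx == b].mean() if np.any(idx == b) else 0.0
        for b in range(num_bins)
    ])
    # Reassemble per-layer index tensors (the compressed representation)
    # and the corresponding dequantized weights.
    out_indices, out_weights, start = [], [], 0
    for w in layers:
        n = w.size
        layer_idx = idx[start:start + n].reshape(w.shape)
        out_indices.append(layer_idx.astype(np.uint8 if num_bins <= 256 else np.uint16))
        out_weights.append(centroids[layer_idx])
        start += n
    return out_indices, out_weights, centroids

# Example: two random "layers" compressed with one shared 256-entry codebook.
conv_w = np.random.randn(64, 3, 3, 3).astype(np.float32)
fc_w = np.random.randn(512, 1024).astype(np.float32)
indices, quantized, codebook = bin_and_quantize([conv_w, fc_w], num_bins=256)
print(codebook.shape, indices[0].dtype, np.abs(quantized[1] - fc_w).mean())
```

With 8-bit indices and a single shared codebook, each 32-bit float weight shrinks to roughly one byte, which is consistent with the order of compression the abstract reports, before any additional entropy coding.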
