
ONE-SHOT LAYER-WISE ACCURACY APPROXIMATION FOR LAYER PRUNING

Sara Elkerdawy, Mostafa Elhoushi, Abhineet Singh, Hong Zhang, Nilanjan Ray

28 Oct 2020

Recent advances in neural network pruning have made it possible to remove a large number of filters without any perceptible drop in accuracy. However, the gain in speed depends on the number of filters per layer. In this paper, we propose a one-shot, layer-wise proxy classifier to estimate layer importance, which in turn allows us to prune whole layers. In contrast to existing filter pruning methods, which reduce the width of a dense model, our method reduces its depth and can therefore guarantee inference speed-up. In our proposed method, we first make a single pass over the training data to construct a proxy classifier for each layer using imprinting. Next, we prune the layers with the smallest accuracy difference from their preceding layer until a latency budget is met. Finally, we fine-tune the pruned model to recover accuracy. Experimental results show a 43.70% latency reduction with a 1.27% accuracy increase on CIFAR-100 for the pruned VGG19. Further, we achieve 16% and 25% latency reductions on ImageNet for ResNet-50, with a 0.58% accuracy increase and a 0.01% accuracy decrease, respectively. The major advantage of our proposed method is that these latency reductions cannot be achieved with existing filter pruning methods, as they are bounded by the original model's depth. Code is available at https://github.com/selkerdawy/one-shot-layer-pruning.
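The sketch below illustrates the two ideas the abstract describes: an imprinted (class-mean) proxy classifier built from one layer's features, and ranking layers by the accuracy gain over their preceding layer so the least useful layers are pruned first. It is a minimal illustration under assumed names (`imprinted_proxy_accuracy`, `rank_layers_by_accuracy_gain`, `features_per_layer`), not the authors' released implementation; see the linked repository for the actual code.

```python
# Hedged sketch: layer-wise imprinted proxy classifiers for layer pruning.
# Assumes pooled feature vectors per layer have already been collected
# (e.g., via forward hooks) during one pass over the training data.
import torch
import torch.nn.functional as F


@torch.no_grad()
def imprinted_proxy_accuracy(feats, labels, num_classes):
    """Build an imprinted (class-mean) classifier from one layer's pooled
    features and return its accuracy on those features.

    feats:  (N, D) pooled feature vectors from a single layer
    labels: (N,)   ground-truth class indices
    """
    feats = F.normalize(feats, dim=1)               # L2-normalize embeddings
    # Imprinting: each class weight is the mean of that class's features.
    weights = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        weights[c] = feats[labels == c].mean(dim=0)
    weights = F.normalize(weights, dim=1)
    # Classify by cosine similarity to the nearest class mean.
    preds = (feats @ weights.t()).argmax(dim=1)
    return (preds == labels).float().mean().item()


@torch.no_grad()
def rank_layers_by_accuracy_gain(features_per_layer, labels, num_classes):
    """Rank layers by the proxy-accuracy difference from their preceding
    layer; layers with the smallest gain are pruning candidates first."""
    accs = [imprinted_proxy_accuracy(f, labels, num_classes)
            for f in features_per_layer]
    gains = [accs[0]] + [accs[i] - accs[i - 1] for i in range(1, len(accs))]
    order = sorted(range(len(gains)), key=lambda i: gains[i])
    return order, accs
```

In use, one would remove layers in the returned order (skipping layers that cannot be dropped for architectural reasons, e.g., stride or channel changes) until the measured latency fits the budget, and then fine-tune the shortened model, as described above.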
