28 Oct 2020

Network quantization has been widely studied as a way to compress deep neural networks for deployment on mobile devices. Conventional methods quantize the parameters of every layer at the same fixed precision, regardless of how many parameters each layer holds. However, quantizing the weights of layers with many parameters is more effective at reducing model size. Accordingly, in this paper we propose a novel mixed-precision quantization method based on reinforcement learning. Specifically, we use the number of parameters in each layer as a prior for our framework. Using the accuracy and the bit-width as a reward, the proposed framework determines the optimal quantization policy for each layer. By applying this policy sequentially, we achieve a weighted average of 2.97 bits for the VGG-16 model on the CIFAR-10 dataset with no loss of accuracy compared with its full-precision baseline. We also show that our framework can provide an optimal quantization policy for VGG-Net and ResNet that minimizes storage while preserving accuracy.
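To make the idea concrete, below is a minimal sketch of the two quantities the abstract relies on: per-layer uniform quantization at a chosen bit-width, and the parameter-count-weighted average bit-width behind the reported 2.97-bit result. The symmetric quantizer and the reward form are illustrative assumptions only; the abstract does not specify the paper's exact quantizer or reward, and all names here (quantize_layer, weighted_average_bits, reward, lam) are hypothetical.

```python
import numpy as np

def quantize_layer(weights, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits
    (bits >= 2). An assumed quantizer; the paper's exact scheme is not
    given in the abstract."""
    levels = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.abs(weights).max() / levels
    if scale == 0:
        return weights                      # all-zero layer: nothing to quantize
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale                        # dequantized values (simulated quantization)

def weighted_average_bits(layer_sizes, bit_widths):
    """Average bit-width weighted by each layer's parameter count --
    the metric behind the reported 2.97 bits for VGG-16."""
    total = sum(layer_sizes)
    return sum(n * b for n, b in zip(layer_sizes, bit_widths)) / total

def reward(accuracy, avg_bits, lam=0.1):
    """Hypothetical reward combining accuracy and bit-width, as the
    abstract describes; the trade-off coefficient `lam` is an assumption."""
    return accuracy - lam * avg_bits

# Example: a per-layer bit-width policy for three layers of very different sizes.
layer_sizes = [1_728, 2_359_296, 102_760_448]      # illustrative parameter counts
policy = [8, 4, 2]                                 # bits chosen per layer by the agent
print(weighted_average_bits(layer_sizes, policy))  # dominated by the largest layer
```

Note that the weighted average is dominated by the largest layers, which is why assigning low bit-widths to parameter-heavy layers compresses the model most; the reinforcement-learning agent's role in the paper is to search over such per-layer policies.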
