  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:11:38
08 Jun 2021

Recently, many model quantization approaches have been investigated to reduce the model size and improve the inference speed of convolutional neural networks (CNNs). However, these approaches inevitably lead to a decrease in classification accuracy. To address this problem, this paper proposes MOGAQNN, a mixed precision quantization method combined with channel expansion of CNNs using a multi-objective genetic algorithm. In MOGAQNN, each individual in the population encodes both a mixed precision quantization policy and a channel expansion policy. During the evolution process, the two policies are optimized simultaneously by the non-dominated sorting genetic algorithm II (NSGA-II). Finally, the best individual in the last population is selected, and its performance on the test set is reported as the final result. Experimental results with five popular CNNs on two benchmark datasets demonstrate that MOGAQNN can greatly reduce the model size while improving classification accuracy at the same time.
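The encoding described in the abstract can be sketched in a few lines: each individual pairs a per-layer bit-width vector (the mixed precision quantization policy) with a per-layer channel multiplier vector (the channel expansion policy), and non-dominated sorting ranks individuals by two competing objectives. The sketch below is a minimal illustration under assumed names and toy objective proxies; it is not the authors' implementation, and the actual MOGAQNN objectives would be measured model size and validation accuracy.

```python
import random

# Hypothetical encoding: per-layer bit-widths (quantization policy) paired
# with per-layer channel expansion factors. All ranges are assumptions.
N_LAYERS = 5
BIT_CHOICES = [2, 4, 8]            # candidate bit-widths per layer
EXPAND_CHOICES = [1.0, 1.25, 1.5]  # candidate channel expansion factors

def random_individual(rng):
    bits = tuple(rng.choice(BIT_CHOICES) for _ in range(N_LAYERS))
    expand = tuple(rng.choice(EXPAND_CHOICES) for _ in range(N_LAYERS))
    return bits, expand

def objectives(individual):
    """Two toy objectives to minimize: a size proxy and an error proxy.
    Real MOGAQNN would use actual model size and classification error."""
    bits, expand = individual
    size = sum(b * e for b, e in zip(bits, expand))       # size grows with both
    error = sum(1.0 / b for b in bits) - 0.1 * sum(expand)  # low bits hurt, width helps
    return size, error

def dominates(a, b):
    """a dominates b if a is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Individuals dominated by no one else (NSGA-II's rank-0 front)."""
    scored = [(ind, objectives(ind)) for ind in population]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored)]

rng = random.Random(0)
population = [random_individual(rng) for _ in range(20)]
front = pareto_front(population)
```

A full NSGA-II loop would additionally apply crossover and mutation to both policy vectors, then use rank and crowding distance for survivor selection; the front above is just the selection criterion at the core of that loop.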

Chairs:
Jinyu Li
