Evolutionary Quantization Of Neural Networks With Mixed-Precision
Zhenhua Liu, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Wen Gao
Quantization is an effective way to reduce the memory and computation costs of deep neural networks. Most existing methods use fixed-precision quantization, e.g., weights and activations (i.e., output features) are represented as 8-bit values. Although mixed-precision quantization offers greater flexibility to allocate computation resources efficiently while maintaining network performance, it is difficult to accurately determine the optimal bit-width of each layer. In this paper, we develop a novel evolutionary method to automatically determine the bit-widths of weights and activations in each convolutional layer, namely Evolutionary Mixed-Precision Quantization (EMQ). Specifically, the quantization intervals of the weights and activations of all layers in the given network are simultaneously encoded as an individual. The fitness of each individual is the performance of the corresponding quantized network. The optimal quantization result is updated and selected during the evolutionary search. Extensive experiments on benchmark datasets and models demonstrate the effectiveness of the proposed method over state-of-the-art network quantization algorithms.
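As a rough illustration of the search procedure described in the abstract, here is a minimal sketch, not the authors' implementation: it encodes only one weight bit-width per layer (EMQ encodes quantization intervals for both weights and activations), and it substitutes a toy quantization-error fitness for the accuracy of the quantized network that EMQ actually evaluates. All names, parameters, and the cost penalty are hypothetical.

```python
import random

import numpy as np

# Toy stand-in for the paper's fitness: score an assignment by the
# mean-squared quantization error of random per-layer weights, penalized
# by the average bit-width (a proxy for model cost). In EMQ proper, the
# fitness is the performance of the network quantized with the candidate
# bit-widths.
rng = np.random.default_rng(0)
LAYERS = 8
WEIGHTS = [rng.standard_normal(1000) for _ in range(LAYERS)]
BIT_CHOICES = [2, 3, 4, 5, 6, 8]


def quantize(w, bits):
    """Uniform symmetric quantization of w to the given bit-width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale


def fitness(individual):
    """Higher is better: low quantization error and low average bit-width."""
    err = np.mean([np.mean((w - quantize(w, b)) ** 2)
                   for w, b in zip(WEIGHTS, individual)])
    cost = np.mean(individual) / max(BIT_CHOICES)
    return -(err + 0.01 * cost)


def mutate(individual, p=0.2):
    """Resample each layer's bit-width with probability p."""
    return [random.choice(BIT_CHOICES) if random.random() < p else b
            for b in individual]


def crossover(a, b):
    """One-point crossover of two bit-width assignments."""
    cut = random.randrange(1, LAYERS)
    return a[:cut] + b[cut:]


# Evolutionary search: each individual encodes one bit-width per layer.
population = [[random.choice(BIT_CHOICES) for _ in range(LAYERS)]
              for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitism: keep the best individuals
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("best per-layer bit-widths:", best)
```

The elitist step mirrors the abstract's description of the optimal quantization result being updated and selected across generations: the best-scoring individuals survive unchanged while the rest of the population is refilled by crossover and mutation.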
Chairs: Jinyu Li