Mapping Functional Changes in the Embryonic Heart of Atlantic Salmon Post Viral Infection Using AI Technique
S. Chattopadhyay, A. Malachowski, J. K. Swain, R. A. Dalmo, A. Horsch, D. K. Prasad
Length: 00:11:59
We study the quantization of neural-network weights for compression and representation without retraining. The goal is to facilitate neural-network representation and deployment in standard formats, so that the weights of general networks can be quantized and entropy coded within the deployment format. We relate weight entropy to model accuracy and evaluate the empirical weight distributions against known distributions. Several scalar quantization strategies were tested. We find that the weights are typically well approximated by a Laplacian distribution, for which the optimal quantizers are closely approximated by entropy-coded uniform quantizers with a deadzone. Results indicate that the size of popular image-classification networks can be reduced 8-fold with accuracy losses near 1%.
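
The deadzone quantizer mentioned in the abstract can be illustrated with a short sketch. The NumPy code below is a minimal illustration, not the authors' implementation: it quantizes Laplacian-like "weights" with a uniform quantizer whose zero bin is widened into a deadzone, reconstructs them at bin midpoints, and reports the empirical entropy of the indices as a proxy for the rate an entropy coder would achieve. The step size, deadzone width, and Laplacian scale used here are hypothetical.

import numpy as np

def deadzone_quantize(w, step, deadzone_ratio=1.0):
    # Uniform scalar quantizer with a widened zero bin ("deadzone").
    # Magnitudes below deadzone_ratio * step map to index 0; the rest of
    # the range is split into uniform bins of width `step`.
    mag = np.abs(w)
    idx = np.floor((mag - deadzone_ratio * step) / step) + 1
    idx = np.maximum(idx, 0)          # everything inside the deadzone -> 0
    return (np.sign(w) * idx).astype(np.int32)

def deadzone_dequantize(idx, step, deadzone_ratio=1.0):
    # Reconstruct each weight at the midpoint of its quantization bin.
    mag = np.where(idx == 0, 0.0,
                   deadzone_ratio * step + (np.abs(idx) - 0.5) * step)
    return np.sign(idx) * mag

def entropy_bits_per_weight(idx):
    # Empirical Shannon entropy of the indices (bits per weight),
    # a lower bound on the size after entropy coding.
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy demo with Laplacian-distributed values, mirroring the distribution
# the abstract reports for trained network weights.
rng = np.random.default_rng(0)
w = rng.laplace(loc=0.0, scale=0.05, size=100_000).astype(np.float32)
step = 0.02                           # hypothetical step size
idx = deadzone_quantize(w, step)
w_hat = deadzone_dequantize(idx, step)
print("bits/weight:", entropy_bits_per_weight(idx))
print("MSE:", float(np.mean((w - w_hat) ** 2)))

Shrinking the step size lowers the reconstruction error but raises the entropy of the indices, which is the rate-accuracy trade-off the abstract summarizes.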