  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:57
18 Oct 2022

A stochastic binary-ternary (SBT) quantization approach is introduced for communication-efficient federated computation, a form of collaborative computing in which locally trained models are exchanged between institutions. Communicating deep neural network models can be highly inefficient because of their large size, which motivates model compression, in which quantization is an important step. Two well-known quantization algorithms are binary and ternary quantization: the first achieves high compression at the cost of accuracy, while the second preserves accuracy with less compression. To better exploit the trade-off between accuracy and compression, we propose an algorithm that stochastically switches between binary and ternary quantization. By combining it with uniform quantization, we further extend the proposed algorithm into a hierarchical method that achieves even better compression without sacrificing accuracy. We tested the proposed algorithm using the Neural Network Compression Test Model (NCTM) provided by the MPEG community. Our results demonstrate that the hierarchical variant of the proposed algorithm outperforms other quantization algorithms in terms of compression, while maintaining accuracy competitive with that of other methods.
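
The abstract only outlines the method at a high level, so the sketch below is a rough illustration of what stochastic switching between binary and ternary quantization could look like for a single weight tensor before it is communicated. The per-tensor switching probability `p_binary`, the mean-absolute-value scaling factor, and the 0.7 ternary threshold are illustrative assumptions rather than details from the paper, and the hierarchical combination with uniform quantization is omitted.

```python
import numpy as np

def binary_quantize(w):
    # Binary quantization: map each weight to {-alpha, +alpha}.
    # Here alpha is the mean absolute weight (an assumed, common choice).
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def ternary_quantize(w, delta_factor=0.7):
    # Ternary quantization: map each weight to {-alpha, 0, +alpha}.
    # Weights with magnitude below an assumed threshold delta are zeroed.
    delta = delta_factor * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def sbt_quantize(w, p_binary=0.5, rng=None):
    # Stochastic binary-ternary switching: for each tensor, randomly pick
    # binary or ternary quantization. The probability p_binary is a
    # hypothetical knob trading compression against accuracy.
    rng = rng or np.random.default_rng()
    if rng.random() < p_binary:
        return binary_quantize(w)
    return ternary_quantize(w)

# Example: quantize one weight tensor before exchanging it between institutions.
w = np.random.randn(256, 128).astype(np.float32)
w_q = sbt_quantize(w, p_binary=0.5)
```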
