Fixed-Point Optimization Of Transformer Neural Network
Yoonho Boo, Wonyong Sung
The Transformer model adopts a self-attention structure and shows very good performance in various natural language processing tasks. However, it is difficult to implement the Transformer in embedded systems because of its very large model size. In this study, we quantize the parameters and hidden signals of the Transformer to reduce its complexity. Not only the weight and embedding matrices but also the inputs and softmax outputs are quantized so that low-precision matrix multiplication can be exploited. The fixed-point optimization steps consist of quantization sensitivity analysis, hardware-conscious word-length assignment, quantization and retraining, and post-training for improved generalization. We achieve a BLEU score of 27.51 on the WMT English-to-German translation task with 4-bit weights and 6-bit hidden signals.
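As a rough illustration of representing weights and hidden signals with the word lengths reported in the abstract (4-bit weights, 6-bit signals), the sketch below applies a generic symmetric uniform quantizer. The function name, the max-magnitude step-size choice, and the tensor shapes are illustrative assumptions; the paper's own quantizer, word-length assignment, and retraining procedure are detailed in the full text.

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Symmetric uniform quantization of a tensor to num_bits.

    A minimal sketch, not the paper's exact quantizer: the step size here
    is taken from the maximum magnitude, whereas the paper selects word
    lengths via sensitivity analysis and refines the model by retraining.
    """
    # Largest representable integer level on each side of zero.
    levels = 2 ** (num_bits - 1) - 1
    # Step size derived from the dynamic range of the tensor.
    step = np.max(np.abs(x)) / levels
    # Round to the nearest level, clip to the representable range,
    # and map back to the real-valued domain.
    q = np.clip(np.round(x / step), -levels, levels)
    return q * step

# Hypothetical example mirroring the reported word lengths:
# 4-bit weights and 6-bit hidden signals.
weights = np.random.randn(512, 512).astype(np.float32)
hidden = np.random.randn(64, 512).astype(np.float32)
w_q = uniform_quantize(weights, num_bits=4)
h_q = uniform_quantize(hidden, num_bits=6)
```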