I3D: Transformer architectures with input-dependent dynamic depth for speech recognition
Yifan Peng (Carnegie Mellon University); Jaesong Lee (NAVER); Shinji Watanabe (Carnegie Mellon University)
SPS
Transformer-based end-to-end speech recognition has achieved great success. However, the large footprint and computational overhead make it difficult to deploy these models in some real-world applications. Model compression techniques can reduce the model size and speed up inference, but the compressed model has a fixed architecture, which might be suboptimal. We propose a novel Transformer encoder with Input-Dependent Dynamic Depth (I3D) to achieve strong performance-efficiency trade-offs. With a similar number of layers executed at inference time, I3D-based models outperform both the vanilla Transformer and a statically pruned model obtained via iterative layer pruning. We also present an analysis of the gate probabilities and the input dependency, which helps us better understand deep encoders.
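The core idea of input-dependent dynamic depth can be sketched as follows: each encoder layer is paired with a gate that looks at the current input and decides whether the layer runs or is skipped via the residual path. The sketch below is illustrative only, assuming a toy dense layer in place of a real Transformer block and hypothetical gate parameters; it is not the paper's exact architecture or training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_layer(x, W):
    # Stand-in for one Transformer encoder layer (hypothetical dense block
    # with a residual connection, for illustration only).
    return np.tanh(x @ W) + x

def gate_prob(x, w):
    # Input-dependent gate: mean-pool the frame features, then apply a
    # sigmoid to get a scalar "run this layer" probability.
    return 1.0 / (1.0 + np.exp(-(x.mean(axis=0) @ w)))

def i3d_forward(x, layers, gates, threshold=0.5):
    """Run only the layers whose input-dependent gate exceeds the threshold.

    `layers` and `gates` are hypothetical parameters; real models would use
    trained Transformer blocks and gate networks.
    """
    executed = []
    for i, (W, w) in enumerate(zip(layers, gates)):
        if gate_prob(x, w) > threshold:   # hard decision at inference time
            x = toy_layer(x, W)
            executed.append(i)
        # else: identity skip, saving that layer's computation
    return x, executed

d = 8                                        # toy feature dimension
x = rng.standard_normal((10, d))             # 10 frames of features
layers = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
gates = [rng.standard_normal(d) for _ in range(6)]

y, executed = i3d_forward(x, layers, gates)
print(f"{len(executed)} of {len(layers)} layers executed")
```

Because the gate decision depends on the pooled input features, different utterances can traverse different subsets of layers, which is what yields an input-dependent depth rather than a single fixed pruned architecture.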