Masked Token Similarity Transfer for Compressing Transformer-Based ASR Models

Euntae Choi (Seoul National University); Youshin Lim (42dot); Byeong-Yeol Kim (42dot); Hyung Yong Kim (42dot); Hanbin Lee (42dot); Yunkyu Lim (42dot); Seung Woo Yu (42dot); Sungjoo Yoo (Seoul National University)

06 Jun 2023

Recent self-supervised automatic speech recognition (ASR) models based on transformers achieve state-of-the-art performance, but their footprint is too large for training in low-resource environments or deployment on edge devices. Knowledge distillation (KD) can be employed to reduce the model size. However, when the embedding dimensions of the teacher and student networks differ, transferring token embeddings for better performance becomes difficult. To mitigate this issue, we present a novel KD method in which the student mimics the teacher's prediction vector under our proposed masked token similarity transfer (MTST) loss, where the temporal relation between a token and the unmasked tokens is encoded into a dimension-agnostic token similarity vector. Under our transfer learning setting with a fine-tuned teacher, the proposed method reduces the student's model size to 28.3% of the teacher's while achieving a word error rate of 4.93% on the test-clean subset of the LibriSpeech corpus, surpassing prior works. Our source code will be made available.
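To illustrate why a token similarity vector is dimension-agnostic, the sketch below computes, for each masked token, its cosine similarities to the unmasked tokens; the resulting vector's length depends only on the number of unmasked positions, so teacher and student representations of different widths can be matched directly. This is a minimal sketch of the idea, not the authors' implementation: the function names, the choice of cosine similarity, and the MSE objective are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def token_similarity_vectors(hidden, masked_idx, unmasked_idx):
    """For each masked token, compute cosine similarity to every unmasked token.

    hidden:       (batch, seq_len, dim) hidden states of a model.
    masked_idx:   indices of masked positions (length M).
    unmasked_idx: indices of unmasked positions (length U).
    Returns a (batch, M, U) tensor whose shape is independent of `dim`,
    so teacher and student vectors are directly comparable.
    """
    masked = F.normalize(hidden[:, masked_idx], dim=-1)      # (batch, M, dim)
    unmasked = F.normalize(hidden[:, unmasked_idx], dim=-1)  # (batch, U, dim)
    return masked @ unmasked.transpose(1, 2)                 # (batch, M, U)

def mtst_style_loss(student_hidden, teacher_hidden, masked_idx, unmasked_idx):
    """Match the student's similarity vectors to the teacher's.

    MSE is used here as a stand-in distance; the actual MTST loss in the
    paper may differ in its distance measure and normalization.
    """
    s = token_similarity_vectors(student_hidden, masked_idx, unmasked_idx)
    with torch.no_grad():
        t = token_similarity_vectors(teacher_hidden, masked_idx, unmasked_idx)
    return F.mse_loss(s, t)
```

Because both similarity tensors have shape (batch, M, U) regardless of each model's embedding width, no projection layer between teacher and student spaces is needed for this term of the objective.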
