DECOUPLED NON-PARAMETRIC KNOWLEDGE DISTILLATION FOR END-TO-END SPEECH TRANSLATION
Hao Zhang (University of Information Engineering); Nianwen Si (University of Information Engineering); Yaqi Chen (Information Engineering University); Wen-Lin Zhang (National Digital Switching System Engineering and Technological R&D Center); Xukui Yang (ZZ Institute of Advance Technology); Dan Qu (National Digital Switching System Engineering and Technological R&D Center); Zhen Li (University of Information Engineering)
Existing methods often attempt to transfer knowledge from a powerful machine translation (MT) model to a speech translation (ST) model through elaborate techniques, which usually require transcriptions as extra input during training. However, transcriptions are not always available, and how to improve ST performance without transcriptions, i.e., data efficiency, has rarely been studied in the literature. In this paper, we propose Decoupled Non-parametric Knowledge Distillation (DNKD) from the data perspective to improve data efficiency. Our method follows the knowledge distillation paradigm. However, instead of obtaining the teacher distribution from a sophisticated MT model, we construct it from a non-parametric datastore via k-Nearest-Neighbor (kNN) retrieval, which removes the dependence on transcriptions and an MT model. We then decouple the classic knowledge distillation loss into target and non-target distillation to enhance the effect of the knowledge among non-target logits, where the prominent "dark knowledge" resides. Experiments on the MuST-C corpus show that the proposed method achieves consistent improvements over a strong baseline without requiring any transcription.
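To make the two ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of (1) building a non-parametric teacher distribution from a kNN-MT-style datastore of (decoder state, target token) pairs and (2) a distillation loss decoupled into target and non-target parts. All function names and hyper-parameters here (k, the softmax temperature, and the alpha/beta weights) are illustrative assumptions rather than values from the paper.

import torch
import torch.nn.functional as F


def knn_teacher_distribution(query, keys, values, vocab_size, k=8, temperature=10.0):
    # Non-parametric teacher: retrieve the k nearest datastore entries for the
    # decoder state `query` and turn their negative distances into a
    # probability distribution over the vocabulary.
    # query: (d,)  keys: (N, d)  values: (N,) LongTensor of stored token ids
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)   # (N,) L2 distances
    knn_dists, knn_idx = dists.topk(k, largest=False)          # k nearest neighbours
    weights = F.softmax(-knn_dists / temperature, dim=-1)      # (k,) retrieval weights
    teacher = torch.zeros(vocab_size)
    teacher.scatter_add_(0, values[knn_idx], weights)          # aggregate weights per token
    return teacher                                             # (vocab_size,)


def decoupled_kd_loss(student_logits, teacher_probs, target, alpha=1.0, beta=2.0):
    # Decoupled distillation: one KL term on the binary (target vs. rest) split,
    # plus a separately weighted KL term on the renormalised non-target tokens.
    student_probs = F.softmax(student_logits, dim=-1)
    non_target = torch.ones_like(student_probs, dtype=torch.bool)
    non_target[target] = False

    # Target part: match the probability mass assigned to the gold token.
    s_bin = torch.stack([student_probs[target], student_probs[non_target].sum()])
    t_bin = torch.stack([teacher_probs[target], teacher_probs[non_target].sum()])
    target_kd = F.kl_div(s_bin.log(), t_bin, reduction="sum")

    # Non-target part: match the distribution over all other tokens,
    # where most of the "dark knowledge" lives.
    s_nt = student_probs[non_target] / student_probs[non_target].sum()
    t_nt = teacher_probs[non_target] / teacher_probs[non_target].sum().clamp_min(1e-8)
    non_target_kd = F.kl_div(s_nt.log(), t_nt, reduction="sum")

    return alpha * target_kd + beta * non_target_kd

In such a setup, the teacher distribution would be computed per target position from a datastore built over the ST training data itself (no transcriptions or MT model needed), and the decoupled loss would be combined with the standard cross-entropy objective during training.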