Adaptive Knowledge Distillation Based On Entropy
Kisoo Kwon, Hoshik Lee, Hwidong Na, Nam Soo Kim
The knowledge distillation (KD) approach is widely used in deep learning, mainly for model size reduction. KD exploits the soft labels of a teacher model, which contain dark knowledge that one-hot ground-truth labels do not provide. This knowledge can improve the performance of an already saturated student model. When multiple teacher models are available, KD training typically uses an equally weighted average of the teachers' labels (interpolated training). However, if the knowledge characteristics of the teachers differ, interpolated training risks washing out the individual characteristics of each teacher and can also introduce noise. In this paper, we propose entropy-based KD training, which uses the labels of teacher models with lower entropy at a larger rate than those of the other teachers. The proposed method outperforms the conventional KD training scheme in automatic speech recognition.
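The sketch below illustrates one way the entropy-based weighting described above could be realized; it is not the authors' exact formulation. It assumes PyTorch, per-example classification logits from each teacher, and a softmax over negative entropies as the weighting function, since the abstract only states that lower-entropy teacher labels are used at a larger rate.

```python
import torch
import torch.nn.functional as F

def entropy_weighted_soft_labels(teacher_logits, temperature=1.0):
    """Combine soft labels from multiple teachers, favoring low-entropy ones.

    teacher_logits: list of tensors, each of shape (batch, num_classes).
    Returns a tensor of shape (batch, num_classes) with the mixed soft labels.
    """
    probs = [F.softmax(t / temperature, dim=-1) for t in teacher_logits]
    # Per-example entropy of each teacher's output distribution.
    entropies = torch.stack(
        [-(p * p.clamp_min(1e-12).log()).sum(dim=-1) for p in probs], dim=0
    )  # shape: (num_teachers, batch)
    # Lower entropy -> larger weight (softmax over negative entropies is one
    # possible choice; the paper's exact weighting may differ).
    weights = F.softmax(-entropies, dim=0)             # (num_teachers, batch)
    stacked = torch.stack(probs, dim=0)                # (num_teachers, batch, C)
    return (weights.unsqueeze(-1) * stacked).sum(dim=0)

def kd_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between the mixed teacher labels and the student output."""
    targets = entropy_weighted_soft_labels(teacher_logits, temperature)
    log_q = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_q, targets, reduction="batchmean") * temperature ** 2
```

In this sketch the weights are computed per example, so a teacher that is confident on some inputs but uncertain on others contributes more only where its soft labels carry sharper (lower-entropy) knowledge.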