Generalized Knowledge Distillation From An Ensemble Of Specialized Teachers Leveraging Unsupervised Neural Clustering
Takashi Fukuda, Gakuto Kurata
SPS
This paper proposes an improved generalized knowledge distillation framework with multiple dissimilar teacher networks, each specialized for a specific acoustic domain, to make a deployable student network more robust to challenging acoustic environments. We first present a method for partitioning the training data to construct the teacher ensemble, using unsupervised neural clustering over features based on context-dependent phonemes that represent each acoustic domain. Second, we show how a single student network is effectively trained with multiple specialized teachers built from the partitioned data. During training, the weights of the student network are updated using a composite two-part cross-entropy loss obtained from a pair of teachers: a specialized teacher matched to the input speech and a generalized teacher trained on a balanced data set. Unlike system combination methods, our approach incorporates the benefits of multiple models into a single student network via knowledge distillation, without increasing computational cost at decoding time. The effectiveness of the proposed technique is demonstrated on acoustically diverse signals contaminated by challenging real-world noise.
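As a rough illustration of the composite two-part loss described above, the sketch below combines soft cross entropies against a specialized and a generalized teacher. The interpolation weight `alpha`, temperature `T`, and all function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def composite_kd_loss(student_logits, spec_logits, gen_logits,
                      alpha=0.5, T=1.0):
    """Two-part soft cross entropy against a pair of teachers.

    alpha weights the specialized teacher against the generalized one;
    both alpha and T are hypothetical hyperparameters for illustration.
    """
    log_p = np.log(softmax(student_logits, T) + 1e-12)  # student log-probs
    q_spec = softmax(spec_logits, T)   # specialized-teacher posteriors
    q_gen = softmax(gen_logits, T)     # generalized-teacher posteriors
    ce_spec = -(q_spec * log_p).sum(axis=-1).mean()
    ce_gen = -(q_gen * log_p).sum(axis=-1).mean()
    return alpha * ce_spec + (1.0 - alpha) * ce_gen
```

In an actual training loop, the specialized teacher would be selected per utterance according to the cluster (acoustic domain) assigned to the input speech, while the generalized teacher is shared across all utterances.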
Chairs:
Abdelrahman Mohamed