
Sequence-Level Self-Teaching Regularization

Eric Sun, Liang Lu, Zhong Meng, Yifan Gong

Length: 00:09:28
08 Jun 2021

In our previous research, we proposed frame-level self-teaching networks to regularize deep neural networks during training. In this paper, we extend that approach and propose a sequence-level self-teaching network to regularize sequence-level information in speech recognition. The idea is to generate sequence-level soft supervision labels from the top layer of the network to supervise the training of the lower-layer parameters. The network is trained with an auxiliary criterion that reduces the sequence-level Kullback-Leibler (KL) divergence between the top layer and lower layers, where the posterior probabilities in the KL-divergence term are computed from a lattice at the sequence level. We evaluated the sequence-level self-teaching regularization approach with bidirectional long short-term memory (BLSTM) models on the LibriSpeech task, and show consistent improvements over a discriminative sequence-level maximum mutual information (MMI) trained baseline.
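A minimal sketch of the general self-teaching idea is given below. This is not the authors' implementation: the paper computes the KL term from lattice-based sequence-level posteriors, whereas this simplified illustration uses frame-wise posteriors only, and all module names, layer choices, and hyperparameters (e.g. which lower layer is tapped, the KL weight) are hypothetical. It shows only the core mechanism of using detached top-layer soft labels to regularize a lower layer through an auxiliary KL criterion.

```python
# Simplified, hypothetical sketch of self-teaching regularization (frame-wise stand-in
# for the paper's lattice-based sequence-level KL term).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfTeachingBLSTM(nn.Module):
    def __init__(self, feat_dim=80, hidden=512, num_layers=4, num_targets=4000):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(feat_dim if i == 0 else 2 * hidden, hidden,
                    batch_first=True, bidirectional=True)
            for i in range(num_layers)
        )
        # Main classifier on the top layer and an auxiliary classifier on a lower layer.
        self.top_out = nn.Linear(2 * hidden, num_targets)
        self.aux_out = nn.Linear(2 * hidden, num_targets)

    def forward(self, x):
        lower_hidden = None
        h = x
        for i, lstm in enumerate(self.layers):
            h, _ = lstm(h)
            if i == 1:                     # tap an intermediate layer (hypothetical choice)
                lower_hidden = h
        return self.top_out(h), self.aux_out(lower_hidden)

def self_teaching_loss(top_logits, aux_logits, targets, kl_weight=0.1):
    """Main loss on the top layer plus a KL term pushing the lower layer's
    posteriors toward the (detached) top-layer soft labels."""
    # top_logits, aux_logits: (batch, time, targets); targets: (batch, time)
    ce = F.cross_entropy(top_logits.transpose(1, 2), targets)
    soft_labels = F.softmax(top_logits.detach(), dim=-1)   # teacher: top layer, no gradient
    kl = F.kl_div(F.log_softmax(aux_logits, dim=-1), soft_labels, reduction="batchmean")
    return ce + kl_weight * kl
```

Because the soft labels are detached, the auxiliary KL term only propagates gradients into the lower layers, so the top layer acts purely as the teacher; in the paper this teacher signal is a sequence-level posterior derived from a lattice rather than the per-frame softmax used here.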

Chairs:
Zhong Meng

