Fast Training Of Deep Neural Networks For Speech Recognition

Guojing Cong, Brian Kingsbury, Tianyi Liu, Chih-Chieh Yang

Length: 13:26
04 May 2020

Training large, deep neural network acoustic models for speech recognition on large datasets takes a long time on a single GPU, motivating research on parallel training algorithms. We present an approach for training a bidirectional LSTM acoustic model on the 2000-hour Switchboard corpus. The model we train achieves state-of-the-art word error rates, 7.5% on the Hub5-2000 Switchboard test set and 13.1% on the Callhome test set, and scales to an unprecedented 96 learners while employing only 12 global reductions per epoch of training. Because our implementation incurs far fewer reductions than prior work, it does not require aggressively optimized communication primitives to reach state-of-the-art performance in a short amount of time. Training with 48 NVIDIA V100 GPUs takes 5 hours; with 96 GPUs, training takes around 3 hours.
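The abstract states that communication is limited to 12 global reductions per epoch, far fewer than gradient-synchronous training, which reduces every step. As a point of reference only, the sketch below shows one common way such infrequent reductions can be realized, periodic model averaging across learners with torch.distributed; the helper names, the data loader, and the even spacing of reductions over the epoch are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch of training with infrequent global reductions.
# NOT the authors' exact method: the abstract only specifies 12 global
# reductions per epoch; periodic model averaging is one common realization.
# `build on an initialized torch.distributed process group before calling.
import torch
import torch.distributed as dist


def average_parameters(model):
    """All-reduce every parameter and divide by world size (model averaging)."""
    world_size = dist.get_world_size()
    for p in model.parameters():
        dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
        p.data /= world_size


def train_one_epoch(model, optimizer, loader, loss_fn, reductions_per_epoch=12):
    # Space the global reductions evenly over the epoch (an assumption).
    steps_per_reduction = max(1, len(loader) // reductions_per_epoch)
    for step, (features, targets) in enumerate(loader, start=1):
        optimizer.zero_grad()
        loss = loss_fn(model(features), targets)
        loss.backward()
        optimizer.step()                      # purely local update
        if step % steps_per_reduction == 0:
            average_parameters(model)         # one of the few global reductions
```

With only a dozen reductions per epoch, communication is a small fraction of the runtime, which is why the abstract notes that aggressively optimized collective primitives are not needed to scale to 96 learners.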
