Improving Efficiency In Large-Scale Decentralized Distributed Training
Wei Zhang, Xiaodong Cui, Abdullah Kayi, Ulrich Finkler, Brian Kingsbury, George Saon, Youssef Mroueh, Alper Buyuktosunoglu, Payel Das, David Kung, Michael Picheny, Mingrui Liu
SPS
Decentralized Parallel SGD (D-PSGD) and its asynchronous variant, Asynchronous Decentralized Parallel SGD (AD-PSGD), form a family of distributed learning algorithms that have been demonstrated to perform well on large-scale deep learning tasks. One drawback of (A)D-PSGD is that the spectral gap of its mixing matrix decreases as the number of learners in the system increases, which hampers convergence. In this paper, we investigate techniques that accelerate (A)D-PSGD-based training by improving the spectral gap while minimizing the communication cost. We demonstrate the effectiveness of the proposed techniques through experiments on the 2000-hour Switchboard speech recognition task and the ImageNet computer vision task. On an IBM P9 supercomputer, our system trains an LSTM acoustic model in 2.28 hours with 7.5% WER on the Hub5-2000 Switchboard (SWB) test set and 13.3% WER on the CallHome (CH) test set using 64 V100 GPUs, and in 1.98 hours with 7.7% WER on SWB and 13.3% WER on CH using 128 V100 GPUs, the fastest training time reported to date.
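The spectral-gap issue the abstract describes can be illustrated numerically. For a ring topology (a common choice in decentralized SGD, assumed here purely for illustration; the paper's actual mixing topology may differ), each learner averages its model equally with itself and its two neighbors, giving a doubly stochastic circulant mixing matrix. The sketch below computes the spectral gap, one minus the second-largest eigenvalue magnitude, and shows it shrinking as the number of learners grows:

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring of n learners:
    each learner averages equally with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1.0 / 3.0
        W[i, (i - 1) % n] = 1.0 / 3.0
        W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def spectral_gap(W):
    """1 - |second-largest eigenvalue|; a larger gap means faster mixing
    (faster consensus among learners), hence better convergence."""
    mags = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - mags[1]

for n in (8, 16, 32, 64):
    print(f"n = {n:3d}  spectral gap = {spectral_gap(ring_mixing_matrix(n)):.4f}")
```

For the ring, the second-largest eigenvalue is (1 + 2cos(2π/n))/3, so the gap decays roughly as O(1/n²); at n = 64 it is already below 0.004, which is the convergence bottleneck the paper's techniques target.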