Learned Transferable Architectures Can Surpass Hand-Designed Architectures For Large Scale Speech Recognition
Liqiang He, Dan Su, Dong Yu
In this paper, we explore neural architecture search (NAS) for automatic speech recognition (ASR) systems. We conduct the architecture search on a small proxy dataset and then evaluate the network, constructed from the searched architecture, on a large dataset. Specifically, we propose a revised search space that theoretically facilitates the search algorithm's exploration of low-complexity architectures. Extensive experiments show that: (i) the architecture learned in the revised search space greatly reduces computational overhead and GPU memory usage with only mild performance degradation; and (ii) the searched architecture achieves more than 15% relative improvement (averaged over four test sets) on the large dataset, compared with our best hand-designed DFSMN-SAN architecture. To the best of our knowledge, this is the first report of NAS results on a large-scale dataset (up to 10K hours), indicating the promising application of NAS to industrial ASR systems.
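The abstract does not give implementation details of the search space revision, but the core idea of biasing a differentiable NAS toward low-complexity architectures can be sketched. Below is a minimal, hypothetical illustration in the style of a DARTS-like mixed-operation cell; the operation set, relative FLOP costs, and penalty weight are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: a DARTS-style mixed operation whose architecture
# weights (alphas) are regularized by an expected-complexity penalty,
# nudging the search toward cheaper operations. NOT the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Candidate ops with assumed relative compute costs.
OPS = {
    "skip":     (lambda c: nn.Identity(),                           0.0),
    "sep_conv": (lambda c: nn.Conv1d(c, c, 3, padding=1, groups=c), 0.3),
    "conv3":    (lambda c: nn.Conv1d(c, c, 3, padding=1),           1.0),
    "conv5":    (lambda c: nn.Conv1d(c, c, 5, padding=2),           1.7),
}

class MixedOp(nn.Module):
    """Weighted sum over candidate ops; weights are softmaxed alphas."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build, _ in OPS.values())
        self.costs = torch.tensor([cost for _, cost in OPS.values()])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(OPS)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def expected_cost(self):
        # Differentiable complexity proxy: expected op cost under alphas.
        return (F.softmax(self.alpha, dim=0) * self.costs).sum()

# Usage: add the expected cost to the task loss so cheap ops are preferred.
cell = MixedOp(channels=40)
x = torch.randn(2, 40, 100)             # (batch, feature dim, frames)
task_loss = cell(x).pow(2).mean()       # stand-in for the ASR training loss
loss = task_loss + 0.1 * cell.expected_cost()  # 0.1 is an assumed weight
loss.backward()
```

After the search converges, each mixed operation is discretized to its highest-weight candidate, and the resulting architecture is retrained from scratch on the full dataset, which is the standard proxy-to-target transfer setup the abstract describes.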