  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:45
10 Jun 2021

In this paper, we explore neural architecture search (NAS) for automatic speech recognition (ASR) systems. We conduct the architecture search on a small proxy dataset and then evaluate the network constructed from the searched architecture on a large dataset. Specifically, we propose a revised search space that theoretically facilitates the search algorithm's exploration of architectures with low complexity. Extensive experiments show that: (i) the architecture learned in the revised search space greatly reduces computational overhead and GPU memory usage with only mild performance degradation; (ii) the searched architecture achieves more than 15% relative improvement (averaged over the four test sets) on the large dataset, compared with our best hand-designed DFSMN-SAN architecture. To the best of our knowledge, this is the first report of NAS results with a large-scale dataset (up to 10K hours), indicating the promising applicability of NAS to industrial ASR systems.
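The core NAS idea the abstract relies on can be sketched with a toy differentiable mixed operation: during the search, every candidate operation is applied and blended by softmax-normalized architecture weights, and afterwards the cell is discretized by keeping only the strongest candidate. This is a minimal illustrative sketch of that general technique, not the paper's actual search space or operations; all names and candidate ops below are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy candidate operations over a feature vector (stand-ins for the real
# candidates, e.g. memory blocks or attention layers; purely illustrative).
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "scale2":   lambda x: [2.0 * v for v in x],
    "zero":     lambda x: [0.0 for _ in x],  # lets the search prune an edge away
}

def mixed_op(x, alpha):
    """Search phase: blend all candidate ops by softmax(alpha)."""
    weights = softmax(alpha)
    out = [0.0] * len(x)
    for w, op in zip(weights, CANDIDATE_OPS.values()):
        y = op(x)
        out = [o + w * v for o, v in zip(out, y)]
    return out

def derive_op(alpha):
    """After the search: keep only the highest-weighted candidate."""
    names = list(CANDIDATE_OPS)
    best = max(range(len(alpha)), key=lambda i: alpha[i])
    return names[best]

x = [1.0, -1.0, 0.5]
alpha = [0.1, 2.0, -1.0]   # in real NAS these are learned by gradient descent
blended = mixed_op(x, alpha)   # weighted blend of all three candidates
chosen = derive_op(alpha)      # -> "scale2"
```

A search space "revised" toward low complexity, as the abstract describes, would bias or constrain which candidates appear in `CANDIDATE_OPS` (and their cost), so that cheap architectures are easier for the search to reach.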

Chairs:
Xiaodong Cui
