
Dual-Attention Neural Transducers for Efficient Wake Word Spotting in Speech Recognition

Saumya Yashmohini Sahai (Amazon); Jing Liu (Amazon.com); Thejaswi Muniyappa (Amazon); Kanthashree Mysore Sathyendra (Amazon); Anastasios Alexandridis (Amazon.com); Grant Strimel (Amazon); Ross McGowan (Amazon); Ariya Rastrow (Amazon Alexa); Athanasios Mouchtaris (Amazon Alexa); Feng-Ju Chang (Amazon); Siegfried Kunzmann (Amazon)

06 Jun 2023

We present dual-attention neural biasing, an architecture designed to boost Wake Word (WW) recognition and reduce inference-time latency on speech recognition tasks. The architecture dynamically switches its runtime compute path, using WW spotting to select which branch of its attention networks to execute for each input audio frame. With this approach, we improve WW spotting accuracy while reducing runtime compute cost, measured in floating point operations (FLOPs). Using an in-house dataset, we demonstrate that the proposed dual-attention network reduces the compute cost by 90% for WW audio frames, with only a 1% increase in the number of parameters. Compared to the baselines, this architecture improves the WW F1 score by 16% relative and reduces the generic rare word error rate by 3% relative.
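
To make the dynamic-switch idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name, layer sizes, and per-frame mask routing are illustrative assumptions. It pairs a full-capacity cross-attention biasing branch with a down-projected lightweight branch, and a wake-word mask selects which branch's output each audio frame receives.

    import torch
    import torch.nn as nn


    class DualAttentionBiasing(nn.Module):
        """Hypothetical sketch: two attention branches plus a per-frame switch."""

        def __init__(self, d_model: int = 256, n_heads: int = 4, d_light: int = 32):
            super().__init__()
            # Full-capacity biasing branch: cross-attention over catalog embeddings.
            self.full_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # Lightweight branch for WW frames: down-project, single-head attention.
            self.down = nn.Linear(d_model, d_light)
            self.light_attn = nn.MultiheadAttention(d_light, 1, batch_first=True)
            self.up = nn.Linear(d_light, d_model)

        def forward(self, frames: torch.Tensor, catalog: torch.Tensor,
                    ww_mask: torch.Tensor) -> torch.Tensor:
            # frames:  (B, T, d_model) encoder outputs, one vector per audio frame
            # catalog: (B, N, d_model) embeddings of biasing phrases
            # ww_mask: (B, T) bool, True on frames flagged by the WW spotter
            full_out, _ = self.full_attn(frames, catalog, catalog)
            light_q = self.down(frames)
            light_kv = self.down(catalog)
            light_out, _ = self.light_attn(light_q, light_kv, light_kv)
            light_out = self.up(light_out)
            # Per-frame switch: WW frames take the cheap branch, others the full one.
            # (Both branches run densely here for clarity; the FLOP savings the
            # paper reports require executing only the selected branch at runtime.)
            return torch.where(ww_mask.unsqueeze(-1), light_out, full_out)


    # Toy usage
    B, T, N, D = 2, 50, 10, 256
    layer = DualAttentionBiasing(d_model=D)
    frames = torch.randn(B, T, D)
    catalog = torch.randn(B, N, D)
    ww_mask = torch.zeros(B, T, dtype=torch.bool)
    ww_mask[:, :5] = True  # pretend the first 5 frames are wake-word audio
    out = layer(frames, catalog, ww_mask)  # (B, T, D)

For readability this sketch computes both branches on every frame and merges them with a mask; an actual deployment would route each frame to a single branch so that WW frames genuinely skip the expensive attention.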
