Multi-Task Learning For Voice Trigger Detection
Siddharth Sigtia, Pascal Clark, Rob Haynes, Hywel Richards, John Bridle
SPS
We describe the design of a voice trigger detection system for smart speakers. We address two major challenges. The first is that the detectors are deployed in complex acoustic environments, with external noise and loud playback by the device itself. The second is that collecting training examples for a specific keyword or trigger phrase is challenging. We describe a two-stage cascaded architecture in which a low-power detector is always running; if it makes a detection, the candidate audio segment is re-scored by larger models to verify that it contains the trigger phrase. In this study, we focus on the architecture and design of these second-pass detectors. We start by training a general acoustic model that produces phonetic transcriptions, using a large labelled training dataset. Next, we collect a much smaller dataset of examples that are challenging for the baseline system. We then use multi-task learning to train a single model to simultaneously produce accurate phonetic transcriptions on the larger dataset and discriminate between true triggers and easily confusable examples using the smaller dataset. Our results demonstrate that the proposed model halves the error rate relative to the baseline across a range of challenging test conditions, without requiring extra parameters.
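The multi-task setup described above can be sketched as a shared encoder feeding two heads: a frame-level phonetic transcription head trained on the large labelled set, and an utterance-level head that discriminates true triggers from confusable examples on the small set, with the two losses summed. The sketch below is illustrative only: all dimensions, weight initialisations, the tanh encoder, the mean-pooling, and the weighting factor `lam` are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 40-dim acoustic frames,
# a 32-dim shared hidden layer, and 20 phone classes.
N_FRAMES, FEAT, HID, PHONES = 6, 40, 32, 20

# Shared encoder weights, reused by both tasks -- the multi-task idea.
W_shared = rng.normal(0.0, 0.1, (FEAT, HID))
W_phone = rng.normal(0.0, 0.1, (HID, PHONES))  # transcription head
W_disc = rng.normal(0.0, 0.1, (HID, 1))        # trigger-vs-confusable head

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(frames, phone_targets, is_trigger, lam=1.0):
    """Combined objective: per-frame phonetic cross-entropy plus a
    lam-weighted binary loss on a mean-pooled utterance embedding."""
    h = np.tanh(frames @ W_shared)  # shared representation for both tasks
    # Task 1: phonetic transcription (cross-entropy at each frame).
    p = softmax(h @ W_phone)
    ce = -np.mean(np.log(p[np.arange(len(frames)), phone_targets]))
    # Task 2: discriminate trigger vs. confusable (sigmoid on pooled h).
    logit = float(np.mean(h, axis=0) @ W_disc)
    prob = 1.0 / (1.0 + np.exp(-logit))
    bce = -(is_trigger * np.log(prob) + (1 - is_trigger) * np.log(1.0 - prob))
    return ce + lam * bce

frames = rng.normal(size=(N_FRAMES, FEAT))
targets = rng.integers(0, PHONES, size=N_FRAMES)
loss = multitask_loss(frames, targets, is_trigger=1)
```

In training, batches from the large transcribed dataset would update only the shared encoder and the phonetic head, while batches from the small hard-example dataset would also update the discrimination head; because both losses flow through `W_shared`, the discriminative signal shapes the shared representation without adding parameters at inference time.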