Unified Keyword Spotting and Audio Tagging on Mobile Devices with Transformers
Heinrich Dinkel (Xiaomi Technology); Yongqing Wang (Xiaomi); Zhiyong Yan (Xiaomi); Junbo Zhang (Xiaomi); Yujun Wang (Xiaomi)
SPS
Keyword spotting (KWS) is a core human-machine-interaction front-end task for most modern intelligent assistants.
Recently, a unified keyword spotting and audio tagging (UniKW-AT) framework has been proposed that adds audio tagging (AT) capabilities to a KWS model.
However, previous work did not consider the real-world deployment of a UniKW-AT model, where factors such as model size and inference speed are more important than performance alone.
This work introduces three mobile-device-deployable models named Unified Transformers (UiT).
Our best model achieves an mAP of 34.09 on AudioSet and an accuracy of 97.76 on the public Google Speech Commands V1 dataset.
Further, we benchmark our proposed approaches on four mobile platforms, revealing that the proposed UiT models achieve a speedup of 2 to 6 times over a competitive MobileNetV2 baseline.