Speech Intelligibility Classifiers from 550k Disordered Speech Samples
Subhashini Venugopalan (Google); Jimmy Tobin (Google); Samuel J. Yang (Google); Katie Seaver (Google); Richard Cave (Google); Pan-Pan Jiang (Google); Neil Zeghidour (Google); Rus Heywood (Google); Jordan Green (MGH Institute of Health Professions); Michael Brenner (Google/Harvard)
We developed dysarthric speech intelligibility classifiers on 551,176 disordered speech samples contributed by a diverse set of 468 speakers with a range of self-reported speaking disorders, each sample rated for overall intelligibility on a five-point scale. We trained three models following different deep learning approaches and evaluated them on ~94K utterances from 100 speakers. We further found the models to generalize well (without further training) to the TORGO database (100% accuracy), UASpeech (0.93 correlation), and ALS-TDI PMP (0.81 AUC) datasets, as well as to a dataset of realistic unprompted speech we gathered (106 dysarthric and 76 control speakers, ~2,300 samples). We share our model to advance research in this domain.
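To make the reported metrics concrete, the sketch below shows one plausible way to compute the three metric families named in the abstract (exact-match accuracy on the five-point scale, Pearson correlation between predicted and rated scores, and AUC for a binarized low/high-intelligibility task). This is illustrative only: the labels, predictions, and binarization threshold are placeholders, not the authors' evaluation code.

# Illustrative metric computation for a five-point intelligibility classifier.
# All data here is synthetic; the threshold for binarization is an assumption.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Placeholder ground-truth intelligibility ratings on a 1-5 scale.
y_true = rng.integers(1, 6, size=200)
# Placeholder model predictions, perturbed around the true ratings.
y_pred = np.clip(y_true + rng.integers(-1, 2, size=200), 1, 5)

# Exact-match accuracy on the five-point scale (the kind of score
# reported for TORGO).
print("accuracy:", accuracy_score(y_true, y_pred))

# Pearson correlation between predicted and rated scores (the kind of
# score reported for UASpeech).
print("correlation:", pearsonr(y_true, y_pred)[0])

# AUC for a binarized task, e.g. low vs. high intelligibility (the kind
# of score reported for ALS-TDI PMP). The cutoff of <=2 is hypothetical.
y_true_bin = (y_true <= 2).astype(int)
score = -y_pred.astype(float)  # lower predicted rating => more likely "low"
print("AUC:", roc_auc_score(y_true_bin, score))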