Classifying Speech Intelligibility Levels Of Children In Two Continuous Speech Styles
Yeh-Sheng Lin, Shu-Chuan Tseng
Speech difficulties in children may result from pathological problems. Oral language is normally assessed through expert-directed impressionistic judgments on varying speech types. This paper attempts to construct automatic systems that help detect children with severe speech problems at an early stage. Two continuous speech types, repetitive and storytelling speech, produced by Chinese-speaking hearing and hearing-impaired children, are used to train Long Short-Term Memory (LSTM) and Universal Transformer (UT) models. Three approaches to extracting acoustic features are adopted: MFCCs, Mel spectrograms, and acoustic-phonetic features. Results of leave-one-out cross-validation and of models trained on augmented data show that MFCCs are more useful than Mel spectrograms and acoustic-phonetic features. The LSTM and UT models each have advantages in different settings. Ultimately, our model trained on repetitive speech achieves an F1-score of 0.74 when tested on storytelling speech.
Chairs:
Eric Fosler-Lussier
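
A minimal sketch of the kind of pipeline the abstract describes: frame-level MFCC features fed to an LSTM classifier of intelligibility level. The library choices (librosa, PyTorch), the number of MFCC coefficients, the hidden size, the two-class setup, and the file path are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: assumed libraries and hyperparameters, not the paper's exact setup.
import librosa
import torch
import torch.nn as nn

def extract_mfcc(wav_path, n_mfcc=13, sr=16000):
    """Load an utterance and return a (n_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return torch.tensor(mfcc.T, dtype=torch.float32)  # time-major for the LSTM

class IntelligibilityLSTM(nn.Module):
    """LSTM over frame-level acoustic features, pooled to an utterance-level label."""
    def __init__(self, n_features=13, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)       # final hidden state summarizes the utterance
        return self.classifier(h_n[-1])  # logits over intelligibility levels

# Usage on one utterance (hypothetical file path).
model = IntelligibilityLSTM()
feats = extract_mfcc("child_utterance.wav").unsqueeze(0)  # add batch dimension
predicted_level = model(feats).argmax(dim=-1)
```

A Universal Transformer encoder could be swapped in for the LSTM, and Mel-spectrogram or acoustic-phonetic features for the MFCCs, without changing the overall structure of the sketch.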