  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:42
10 Jun 2021

Speech difficulties in children may result from pathological problems. Oral language ability is normally assessed through expert-led impressionistic judgments of varying speech types. This paper constructs automatic systems to help detect children with severe speech problems at an early stage. Two continuous speech types, repetitive and storytelling speech, produced by Chinese-speaking hearing and hearing-impaired children, are used to train Long Short-Term Memory (LSTM) and Universal Transformer (UT) models. Three approaches to extracting acoustic features are adopted: MFCCs, Mel spectrograms, and acoustic-phonetic features. Results from leave-one-out cross-validation and from models trained on augmented data show that MFCCs are more useful than Mel spectrograms and acoustic-phonetic features. The LSTM and UT models each have advantages in different settings. Ultimately, our model trained on repetitive speech achieves an F1-score of 0.74 when tested on storytelling speech.
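For reference, the reported F1-score is the harmonic mean of precision and recall over binary detection decisions. A minimal sketch of how such a score is computed is below; the labels and predictions are illustrative only, not data from the paper:

```python
def f1_score(y_true, y_pred):
    """Compute the binary F1-score (harmonic mean of precision and
    recall), where label 1 marks a detected speech problem."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 5 true positives, 2 false positives, 1 false negative
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 1, 1, 0, 0]
print(round(f1_score(y_true, y_pred), 2))  # → 0.77
```

Because F1 ignores true negatives, it is a natural metric here: the hearing-impaired class is the minority of interest, and plain accuracy would be inflated by correctly rejecting the many typically-hearing children.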

Chairs:
Eric Fosler-Lussier

