19 Jan 2021

This study explores the use of Transformer-based models for the automated assessment of children's non-native spontaneous speech. Traditional approaches to this task have relied heavily on delivery features (e.g., fluency); the goal of the current study is to build automated scoring models based solely on transcriptions, in order to examine how well they capture additional aspects of speaking proficiency (e.g., content appropriateness, vocabulary, and grammar) despite the high word error rate (WER) of automatic speech recognition (ASR) on children's non-native spontaneous speech. Transformer-based models were built using both manual transcriptions and ASR hypotheses, and versions of the models that incorporate the prompt text were investigated in order to measure content appropriateness more directly. Two baseline systems were used for comparison: an attention-based Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) and a Support Vector Regressor (SVR) with manually engineered content-related features. Experimental results demonstrate the effectiveness of the Transformer-based models: the prompt-aware model using ASR hypotheses achieves a Pearson correlation coefficient (r) of 0.835 with holistic proficiency scores provided by human experts, outperforming both the attention-based LSTM-RNN baseline (r = 0.791) and the SVR baseline (r = 0.767).
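For illustration, a minimal sketch of one way such a prompt-aware Transformer scorer could be set up is shown below: the prompt text and the response transcription are encoded as a sentence pair, and a regression head predicts the holistic score. This is not the authors' exact implementation; the encoder choice (bert-base-uncased), the [CLS] pooling strategy, and the example prompt/response texts are all assumptions.

```python
# Hedged sketch of a prompt-aware Transformer scoring model (assumptions noted above).
import torch
from transformers import AutoModel, AutoTokenizer

class PromptAwareScorer(torch.nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):  # encoder is an assumption
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Regression head mapping the pooled representation to a scalar score.
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]      # [CLS] token as pooled representation
        return self.head(cls).squeeze(-1)      # predicted holistic proficiency score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PromptAwareScorer()

# Hypothetical prompt and ASR hypothesis, encoded as a sentence pair so the
# model can judge content appropriateness relative to the prompt.
batch = tokenizer("Describe your favorite school activity.",
                  "i like play soccer with my friend at lunch",
                  return_tensors="pt", truncation=True)
score = model(**batch)  # would be trained with MSE loss against human holistic scores
```

In this setup, dropping the first argument to the tokenizer yields the prompt-unaware variant that scores the transcription alone.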
