Understanding Shared Speech-Text Representations

Yuan Wang (Google); Kyle Kastner (Google); Zhehuai Chen (Google); Ankur Bapna (Google Research); Andrew Rosenberg (Google LLC); Bhuvana Ramabhadran (Google); Yu Zhang (Google)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
07 Jun 2023

Recently, a number of approaches to train speech models by incorporating text into end-to-end models have been developed, with Maestro advancing state-of-the-art automatic speech recognition (ASR) and Speech Translation (ST) performance. In this paper, we expand our understanding of the resulting shared speech-text representations with two types of analyses. First we examine the limits of text-only domain adaptation, finding that a corpus-specific duration model for speech-text alignment is the most important component for learning a shared speech-text representation. Second, we inspect the similarities between activations of uni-modal (speech or text) encoders as compared to the activations of a shared encoder. We find that the shared encoder learns a more compact and overlapping speech-text representation than the uni-modal encoders. We hypothesize that this partially explains the effectiveness of the Maestro shared speech-text representations.
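The second analysis above compares how close speech and text activations sit in a shared representation space. A minimal sketch of one such comparison, assuming activations have been pooled into one fixed-dimensional vector per utterance (the function name, shapes, and synthetic data are illustrative, not the paper's actual measurement):

```python
import numpy as np

def mean_cosine_similarity(speech_acts: np.ndarray, text_acts: np.ndarray) -> float:
    """Mean cosine similarity between paired speech and text activations.

    speech_acts, text_acts: arrays of shape (n_utterances, dim), where row i
    of each array holds the encoder activation for the same utterance.
    """
    s = speech_acts / np.linalg.norm(speech_acts, axis=1, keepdims=True)
    t = text_acts / np.linalg.norm(text_acts, axis=1, keepdims=True)
    return float(np.mean(np.sum(s * t, axis=1)))

# Toy illustration: a shared encoder should map paired speech and text
# activations closer together than two independent uni-modal encoders.
rng = np.random.default_rng(0)
text = rng.normal(size=(100, 16))
shared_speech = text + 0.1 * rng.normal(size=(100, 16))   # nearly aligned with text
unimodal_speech = rng.normal(size=(100, 16))              # unrelated to text

print(mean_cosine_similarity(shared_speech, text))    # high: overlapping representation
print(mean_cosine_similarity(unimodal_speech, text))  # near zero: disjoint representation
```

A higher mean similarity for paired inputs indicates a more overlapping speech-text representation, which is the qualitative pattern the paper reports for the shared encoder.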
