JOINT PRE-TRAINING WITH SPEECH AND BILINGUAL TEXT FOR DIRECT SPEECH TO SPEECH TRANSLATION
Kun Wei (School of Computer Science, Northwestern Polytechnical University); Long Zhou (Microsoft Research Asia); Ziqiang Zhang (University of Science and Technology of China); Liping Chen (Microsoft); Shujie Liu (Microsoft Research Asia); Lei He (Microsoft Cloud and AI); Jinyu Li (Microsoft); Furu Wei (Microsoft Research Asia)
Direct speech-to-speech translation (S2ST) is an attractive research topic with many advantages over cascaded S2ST. However, direct S2ST suffers from data scarcity, because parallel corpora pairing source-language speech with target-language speech are very rare. To address this issue, we propose Speech2S, a model jointly pre-trained with unpaired speech and bilingual text data for direct speech-to-speech translation. By effectively leveraging the paired text data, Speech2S is able to model the cross-lingual conversion from source-language speech to target-language speech. We evaluate the proposed Speech2S on the Europarl-ST and VoxPopuli datasets. Experimental results demonstrate that Speech2S improves over encoder-only pre-training models by about 5 BLEU points and achieves competitive or even better performance than existing state-of-the-art models.
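To make the idea of joint pre-training concrete, the following is a minimal, hypothetical PyTorch sketch of an objective that combines a speech-unit prediction loss (from unpaired speech) with a bilingual text translation loss over a shared encoder. The module names, dimensions, and the equal loss weighting are illustrative assumptions only; the abstract does not specify the actual Speech2S architecture or training details.

```python
# Hypothetical sketch of joint pre-training on unpaired speech and bilingual text.
# Names, sizes, and the 1:1 loss mixture are assumptions, not the paper's method.
import torch
import torch.nn as nn


class JointPretrainSketch(nn.Module):
    def __init__(self, speech_units=500, text_vocab=10000, d_model=256):
        super().__init__()
        # Shared Transformer encoder applied to both speech features and text embeddings.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.speech_proj = nn.Linear(80, d_model)       # log-Mel features -> model dim
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.unit_head = nn.Linear(d_model, speech_units)  # predicts discrete speech units
        self.text_head = nn.Linear(d_model, text_vocab)     # predicts target-language tokens
        self.ce = nn.CrossEntropyLoss()

    def forward(self, speech_feats, speech_units, src_tokens, tgt_tokens):
        # Speech branch: predict discrete units from (unpaired) speech features.
        h_speech = self.encoder(self.speech_proj(speech_feats))
        speech_loss = self.ce(self.unit_head(h_speech).transpose(1, 2), speech_units)
        # Text branch: predict target-language tokens from source-language tokens,
        # standing in for the bilingual text objective.
        h_text = self.encoder(self.text_emb(src_tokens))
        text_loss = self.ce(self.text_head(h_text).transpose(1, 2), tgt_tokens)
        return speech_loss + text_loss  # equal weighting is an assumption


# Toy usage with random tensors, just to show the shapes involved.
model = JointPretrainSketch()
loss = model(
    speech_feats=torch.randn(2, 50, 80),
    speech_units=torch.randint(0, 500, (2, 50)),
    src_tokens=torch.randint(0, 10000, (2, 20)),
    tgt_tokens=torch.randint(0, 10000, (2, 20)),
)
loss.backward()
```

The design point this illustrates is that the paired text data supplies a cross-lingual signal through the same shared encoder that learns from speech, which is how the abstract motivates the improvement over encoder-only (speech-only) pre-training.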