One-Shot Voice Conversion Based On Speaker Aware Module
Ying Zhang, Hao Che, Chenxing Li, Xiaorui Wang, Zhongyuan Wang
Voice conversion (VC) is the task of converting the speaker identity of speech while preserving its linguistic content. Although several methods have been proposed to enable VC with non-parallel data, it remains difficult to model a voice without a large amount of data or an adaptation process. In this paper, we propose a speaker-aware voice conversion (SAVC) system that realizes one-shot voice conversion without an adaptation stage. SAVC utilizes a speaker-aware module (SAM) to disentangle speaker embeddings. The SAM comprises a dynamic reference encoder, a static speaker knowledge block (SKB), and a multi-head attention layer. The reference encoder compresses a variable-length utterance into a fixed-length vector, the SKB consists of pre-extracted x-vectors, and the multi-head attention layer generates weighted combinations of speaker embeddings. Subsequently, phonetic posteriorgrams (PPGs), serving as the content encoding, are concatenated with the speaker embeddings and fed to the decoder module to generate acoustic features. Experimental results on the Aishell-1 corpus show that the proposed method improves both speaker similarity and the speech quality of converted utterances.
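To illustrate the SAM described above, the following is a minimal PyTorch sketch, not the authors' implementation: a reference encoder summarizes a variable-length utterance into a fixed-length query, and multi-head attention over a static bank of pre-extracted x-vectors (the SKB) yields a weighted speaker embedding. All dimensions, layer choices (a single GRU as the reference encoder), and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpeakerAwareModule(nn.Module):
    """Sketch of a SAM: reference encoder + static x-vector bank
    + multi-head attention producing a weighted speaker embedding."""

    def __init__(self, n_mels=80, embed_dim=512, num_speakers=340, num_heads=8):
        super().__init__()
        # Dynamic reference encoder: a GRU over mel frames; its final
        # hidden state is the fixed-length utterance summary (assumption:
        # the paper's encoder architecture may differ).
        self.ref_encoder = nn.GRU(n_mels, embed_dim, batch_first=True)
        # Static speaker knowledge block: pre-extracted x-vectors, kept
        # frozen (here randomly initialized as a placeholder).
        self.skb = nn.Parameter(torch.randn(num_speakers, embed_dim),
                                requires_grad=False)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)

    def forward(self, mels):                     # mels: (batch, frames, n_mels)
        _, h = self.ref_encoder(mels)            # h: (1, batch, embed_dim)
        query = h.transpose(0, 1)                # (batch, 1, embed_dim)
        keys = self.skb.unsqueeze(0).expand(mels.size(0), -1, -1)
        # Attention weights over the SKB give a weighted combination
        # of x-vectors as the speaker embedding.
        spk_embed, _ = self.attn(query, keys, keys)
        return spk_embed.squeeze(1)              # (batch, embed_dim)

# Downstream, the speaker embedding would be broadcast along time and
# concatenated with the PPG sequence before the decoder, e.g.:
#   dec_in = torch.cat([ppgs, spk_embed.unsqueeze(1).expand(-1, T, -1)], dim=-1)
```

Attending over a fixed bank of known-speaker x-vectors, rather than using the reference embedding directly, is what allows an unseen speaker to be expressed at inference time as a combination of seen speakers without any adaptation step.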