
Streaming Voice Conversion Via Intermediate Bottleneck Features And Non-streaming Teacher Guidance

Yuanzhe Chen (ByteDance); Ming Tu (ByteDance AI Lab); Tang Li (ByteDance Ltd); Xin Li (ByteDance); Qiuqiang Kong (ByteDance); Jiaxin Li (ByteDance); Zhichao Wang (ByteDance); Qiao Tian (ByteDance); Yuping Wang (ByteDance); Yuxuan Wang (ByteDance AI Lab)

06 Jun 2023

Streaming voice conversion (VC) is the task of converting one person's voice to another's in real time. Previous streaming VC methods use phonetic posteriorgrams (PPGs) extracted from automatic speech recognition (ASR) systems to represent speaker-independent information. However, PPGs lack the prosody and vocalization information of the source speaker, and streaming PPGs contain undesired leaked timbre of the source speaker. In this paper, we propose to use intermediate bottleneck features (IBFs) to replace PPGs. VC systems trained with IBFs retain more prosody and vocalization information of the source speaker. Furthermore, we propose a non-streaming teacher guidance (TG) framework that addresses the timbre leakage problem. Experiments show that our proposed IBFs and the TG framework achieve a state-of-the-art streaming VC naturalness of 3.85, a content consistency of 3.77, and a timbre similarity of 3.77 under a future receptive field of 160 ms, significantly outperforming previous streaming VC systems.
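The two ideas in the abstract can be illustrated with a minimal sketch: take the content representation from an intermediate ASR encoder layer (an IBF) instead of the final PPG layer, and train the streaming model to match features from a full-context (non-streaming) teacher. The sketch below assumes hypothetical placeholder modules (TinyASREncoder, teacher_guidance_loss) and is not the authors' implementation; a real system would use a streaming ASR encoder trained on transcribed speech and the distillation target described in the paper.

```python
# A minimal sketch of IBF extraction and non-streaming teacher guidance,
# not the authors' implementation. All module and layer names are
# hypothetical placeholders chosen for illustration.
import torch
import torch.nn as nn

class TinyASREncoder(nn.Module):
    """Stand-in for an ASR encoder; a real system would be a streaming
    conformer/transformer trained on transcribed speech."""
    def __init__(self, feat_dim=80, hidden=256, num_layers=6, vocab=100):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.GRU(feat_dim if i == 0 else hidden, hidden, batch_first=True)
             for i in range(num_layers)]
        )
        self.classifier = nn.Linear(hidden, vocab)  # final layer -> PPGs

    def forward(self, mel, ibf_layer=3):
        """Return the final PPGs and an intermediate bottleneck feature
        (IBF) taken from a middle encoder layer."""
        x = mel
        ibf = None
        for i, layer in enumerate(self.layers):
            x, _ = layer(x)
            if i == ibf_layer:
                ibf = x  # IBF: retains more prosody/vocalization than PPGs
        ppg = self.classifier(x).log_softmax(dim=-1)
        return ibf, ppg

# Non-streaming teacher guidance: the streaming student (causal, limited
# future context) is trained to match features from a full-context teacher,
# one way to reduce leaked source-speaker timbre in streaming features.
def teacher_guidance_loss(student_feat, teacher_feat):
    return nn.functional.l1_loss(student_feat, teacher_feat.detach())

if __name__ == "__main__":
    mel = torch.randn(2, 120, 80)     # (batch, frames, mel bins)
    student = TinyASREncoder()
    teacher = TinyASREncoder()        # stands in for a non-streaming model
    s_ibf, _ = student(mel)
    with torch.no_grad():
        t_ibf, _ = teacher(mel)
    loss = teacher_guidance_loss(s_ibf, t_ibf)
    print(loss.item())
```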
