Voice-preserving Zero-shot Multiple Accent Conversion
Mumin Jin (MIT); Prashant Serai (Meta AI); Jilong Wu (Meta AI); Andros Tjandra (Meta Platforms, Inc); Vimal Manohar (Meta Platforms Inc. ); Qing He (Meta)
SPS
Most people who have tried to learn a foreign language have experienced difficulty understanding or speaking with a native speaker's accent. For native speakers, understanding or speaking a new accent is likewise difficult. An accent conversion system that changes a speaker's accent but preserves that speaker's voice identity, such as timbre and pitch, has potential applications in communication, language learning, and entertainment. Existing accent conversion models tend to change the speaker identity and the accent at the same time. Here, we use adversarial learning to disentangle accent-dependent features while retaining other acoustic characteristics. What sets our work apart from existing accent conversion models is the capability to convert an unseen speaker's utterance to multiple accents while preserving the original voice identity. Subjective evaluations show that our model generates audio that sounds closer to the target accent and like the original speaker.
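The abstract describes adversarial learning for disentangling accent-dependent features, but not the mechanism. A common way to implement such disentanglement is a gradient reversal layer (GRL): an accent classifier is trained on the encoder's features, and the gradient it sends back to the encoder is flipped in sign, so the encoder learns to discard accent cues while other losses preserve voice identity. The sketch below is an illustrative assumption of that mechanism, not the paper's actual implementation; all function names are hypothetical.

```python
# Hedged sketch of a gradient reversal layer (GRL) for adversarial
# disentanglement. This is an assumed mechanism for illustration only;
# the paper says "adversarial learning" without specifying the details.

def grl_forward(features):
    """Forward pass: the GRL is the identity, so the accent classifier
    sees the encoder features unchanged."""
    return features

def grl_backward(grad_from_classifier, lam=1.0):
    """Backward pass: the accent classifier's gradient is multiplied by
    -lam before reaching the encoder, pushing the encoder to REMOVE
    accent information rather than keep it."""
    return [-lam * g for g in grad_from_classifier]

# Tiny worked example: features for one frame, and the accent
# classifier's gradient with respect to those features.
h = [0.5, -1.2, 0.3]                  # encoder output (hypothetical)
g = [0.1, 0.4, -0.2]                  # dL_accent / dh from the classifier
g_encoder = grl_backward(g, lam=0.5)  # gradient the encoder actually sees
# g_encoder is g scaled by -0.5: the encoder is updated in the direction
# that makes accent classification HARDER, i.e. accent-invariant features.
```

In full training, this adversarial term would be combined with reconstruction and speaker-identity losses so that only accent information, not timbre or pitch, is removed from the representation.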