Disentangled Speaker and Language Representations Using Mutual Information Minimization and Domain Adaptation for Cross-Lingual TTS
Detai Xin, Tatsuya Komatsu, Shinnosuke Takamichi, Hiroshi Saruwatari
We propose a method for obtaining disentangled speaker and language representations via mutual information minimization and domain adaptation for cross-lingual text-to-speech (TTS) synthesis. The proposed method extracts speaker and language embeddings from acoustic features with a speaker encoder and a language encoder, then applies domain adaptation to the two embeddings to obtain a language-invariant speaker embedding and a speaker-invariant language embedding. To further disentangle the representations, the method minimizes the mutual information between the two embeddings, removing information entangled within each of them. Disentangled speaker and language representations are critical for cross-lingual TTS synthesis: entangled representations make it difficult to preserve speaker identity when the language representation changes, and consequently degrade performance. We evaluate the proposed method on English and Japanese multi-speaker datasets with 207 speakers in total. Experimental results demonstrate that the proposed method significantly improves the naturalness and speaker similarity of both intra-lingual and cross-lingual TTS synthesis. Furthermore, we show that the proposed method maintains speaker identity well across languages.
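As a concrete illustration of the domain adaptation step, the sketch below uses a gradient reversal layer (GRL) with adversarial domain classifiers, a common way to obtain factor-invariant embeddings. The abstract does not specify the adversarial mechanism, so the GRL formulation and the `GradReverse`/`DomainClassifier` names are assumptions for illustration, not the paper's confirmed implementation.

```python
# Hedged PyTorch sketch: domain adaptation via a gradient reversal layer.
# The concrete architecture is assumed, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainClassifier(nn.Module):
    """Adversarial classifier placed behind the GRL.

    Trained on the speaker embedding with *language* labels (and on the
    language embedding with *speaker* labels), the reversed gradient pushes
    each encoder toward an embedding that is invariant to the other factor.
    """

    def __init__(self, emb_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, n_classes)
        )

    def forward(self, emb, labels, lam=1.0):
        logits = self.net(GradReverse.apply(emb, lam))
        return F.cross_entropy(logits, labels)
```

In training, both classifier losses would be added to the TTS objective, e.g. `loss_adv = lang_clf(spk_emb, lang_ids) + spk_clf(lang_emb, spk_ids)`, so that the reversed gradients reaching the encoders erase language cues from the speaker embedding and vice versa.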
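For the mutual information minimization step, the abstract does not name the estimator; the sketch below assumes a CLUB-style variational upper bound (Cheng et al., 2020) purely as one plausible instantiation, with hypothetical module names and dimensions.

```python
# Hedged PyTorch sketch: minimizing I(speaker; language) with a CLUB-style
# variational upper bound. The estimator choice is an assumption.

import torch
import torch.nn as nn


class CLUB(nn.Module):
    """Variational upper bound on I(speaker; language).

    q(lang | spk) is modeled as a diagonal Gaussian. The estimator network
    is trained to maximize log q (``learning_loss``); the two encoders are
    trained to minimize the resulting MI bound (``forward``).
    """

    def __init__(self, spk_dim, lang_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(
            nn.Linear(spk_dim, hidden), nn.ReLU(), nn.Linear(hidden, lang_dim)
        )
        self.logvar = nn.Sequential(
            nn.Linear(spk_dim, hidden), nn.ReLU(), nn.Linear(hidden, lang_dim), nn.Tanh()
        )

    def forward(self, spk_emb, lang_emb):
        mu, logvar = self.mu(spk_emb), self.logvar(spk_emb)
        # log q for matched (positive) pairs within the batch
        positive = -((lang_emb - mu) ** 2) / 2.0 / logvar.exp()
        # log q averaged over shuffled (negative) pairings
        negative = -((lang_emb.unsqueeze(0) - mu.unsqueeze(1)) ** 2).mean(dim=1) / 2.0 / logvar.exp()
        return (positive.sum(-1) - negative.sum(-1)).mean()

    def learning_loss(self, spk_emb, lang_emb):
        mu, logvar = self.mu(spk_emb), self.logvar(spk_emb)
        # negative Gaussian log-likelihood of the matched pairs
        return (((lang_emb - mu) ** 2) / logvar.exp() + logvar).sum(-1).mean()
```

A typical training loop alternates two updates: first train the estimator with `learning_loss` so that q(lang | spk) tracks the current embeddings, then add a weighted `forward` term to the TTS loss so the encoders reduce the estimated mutual information between the two embeddings.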