Improving Few-Shot Learning for Talking Face System with TTS Data Augmentation
Qi Chen (Shanghai Jiao Tong University); Ziyang Ma (Shanghai Jiao Tong University); Tao Liu (Shanghai Jiao Tong University); Xu Tan (Microsoft Research Asia); Qu Lu (Shanghai Media Tech); Kai Yu (Shanghai Jiao Tong University); Xie Chen (Shanghai Jiao Tong University)
SPS
Audio-driven talking face synthesis has recently attracted broad interest from both academia and industry. However, data acquisition and labeling for audio-driven talking face are labor-intensive and costly, and this scarcity of data leads to poor synthesis quality. To alleviate this issue, we propose to use TTS (Text-To-Speech) for data augmentation to improve the few-shot ability of the talking face system. The misalignment problem introduced by TTS audio is solved with soft-DTW, which we are the first to adopt for the talking face task. Moreover, features extracted by HuBERT are explored to exploit the underlying information in audio, and are found to outperform other features. The proposed method outperforms the baseline model by 17%, 14%, and 38% on MSE score, DTW score, and user-study preference, respectively, demonstrating the effectiveness of TTS data augmentation for few-shot talking face learning.
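The abstract's key technical ingredient is soft-DTW, a differentiable relaxation of dynamic time warping (Cuturi & Blondel, 2017) that can score the alignment between a TTS-generated audio feature sequence and the original one even when their lengths differ. Below is a minimal NumPy sketch of the soft-DTW recursion for illustration only; the paper's actual training objective and feature pipeline are not specified here, and the function name and shapes are our own assumptions.

```python
import numpy as np

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW cost between two feature sequences (illustrative sketch).

    x: (n, d) array, y: (m, d) array of frame-level features
       (e.g. HuBERT frames from real vs. TTS audio -- an assumption here).
    gamma: smoothing factor; as gamma -> 0 this approaches hard DTW.
    """
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean frame distances.
    D = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    # R[i, j] holds the soft-DTW cost of aligning x[:i] with y[:j].
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Smoothed minimum over the three DTW predecessors
            # (match, insertion, deletion), computed stably.
            r = np.array([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]])
            rmin = r.min()
            softmin = -gamma * np.log(np.exp(-(r - rmin) / gamma).sum()) + rmin
            R[i, j] = D[i - 1, j - 1] + softmin
    return R[n, m]
```

Because the soft-min is smooth, this cost is differentiable in both input sequences, which is what allows misaligned TTS audio to be used as a training signal rather than requiring frame-exact labels.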