One Shot Learning For Speech Separation
Yuan-Kuei Wu, Kuan-Po Huang, Yu Tsao, Hung-yi Lee
IEEE SPS
Length: 00:15:13
Despite the recent success of speech separation models, they often fail to separate sources properly when facing unseen sets of speakers or noisy environments. To tackle this problem, we propose applying meta-learning to the speech separation task. We aim to find a meta-initialization model that can quickly adapt to new speakers after seeing only one mixture generated by those speakers. In this paper, we use the model-agnostic meta-learning (MAML) algorithm and the almost-no-inner-loop (ANIL) algorithm with Conv-TasNet to achieve this goal. The experimental results show that our model can adapt not only to a new set of speakers but also to noisy environments. Furthermore, we found that the encoder and decoder serve as feature-reuse layers, while the separator is the task-specific module.
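The abstract describes finding a meta-initialization that adapts to a new task from a single example. The core MAML loop (inner-loop adaptation per task, outer-loop update of the initialization) can be illustrated on a toy 1-D regression problem; this is a minimal first-order sketch for intuition only, not the paper's Conv-TasNet setup, and all names and task definitions here are invented for illustration.

```python
# Minimal first-order MAML sketch on toy 1-D regression (illustration only;
# the paper applies MAML/ANIL to Conv-TasNet for speech separation).
# Model: y = w * x. Each "task" has its own true slope, standing in for a
# speaker set; the learned initialization w should adapt to an unseen task
# from a single support example, mirroring one-shot adaptation.

def loss_grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)^2 with respect to w
    return (w * x - y) * x

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update: adapt per task (inner loop), then update w (outer loop)."""
    meta_grad = 0.0
    for slope in tasks:
        x, y = 1.0, slope * 1.0                        # one support example
        w_adapt = w - inner_lr * loss_grad(w, x, y)    # inner loop (1 step)
        xq, yq = 2.0, slope * 2.0                      # one query example
        # First-order MAML: query-loss gradient taken at the adapted weights
        meta_grad += loss_grad(w_adapt, xq, yq)
    return w - outer_lr * meta_grad / len(tasks)

w = 0.0
tasks = [1.0, 2.0, 3.0]        # meta-training tasks ("speaker sets")
for _ in range(200):
    w = maml_step(w, tasks)

# One-shot adaptation to an unseen task (slope 2.5) from a single example
x, y = 1.0, 2.5
w_new = w - 0.1 * loss_grad(w, x, y)
print(round(w_new, 2))  # → 2.05
```

ANIL follows the same scheme but restricts the inner-loop update to the task-specific head, which matches the paper's finding that only the separator needs adaptation while the encoder and decoder reuse features across tasks.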
Chairs:
Takuya Yoshioka