Surrogate Source Model Learning For Determined Source Separation
Robin Scheibler, Masahito Togami
We propose to learn surrogate functions of universal speech priors for determined blind speech separation. Deep speech priors are highly desirable due to their superior modelling power, but they are not compatible with state-of-the-art independent vector analysis based on majorization-minimization (AuxIVA), since deriving the required surrogate function is not easy, nor always possible. Instead, we do away with exact majorization and directly approximate the surrogate. Taking advantage of iterative source steering (ISS) updates, we backpropagate the permutation-invariant separation loss through multiple iterations of AuxIVA. ISS lends itself well to this task due to its lower complexity and lack of matrix inversion. Experiments show large improvements in terms of scale-invariant signal-to-distortion ratio (SDR) and word error rate (WER) compared to baseline methods. Training is done on two-speaker mixtures, and we experiment with two losses, SDR and coherence. We find that the learnt approximate surrogate generalizes well to mixtures of three and four speakers without any modification. We also demonstrate generalization to a different variant of the AuxIVA update equations. The SDR loss leads to the fastest convergence in terms of iterations, while the coherence loss leads to the lowest WER. We obtain as much as a 36% reduction in WER.
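As a rough illustration of the unrolled ISS updates described in the abstract, the PyTorch sketch below applies one sweep of iterative source steering in which the per-frame source weights come from a small learned network rather than a fixed prior. The network architecture, the weight parameterization, and the names `SurrogateModel` and `iss_sweep` are assumptions made for this sketch, not the authors' exact model.

```python
# Minimal, illustrative sketch: one unrolled AuxIVA/ISS sweep where the per-frame
# source weights are produced by a learned network (the surrogate), so gradients
# of a separation loss can flow through the update. Architecture and names are
# hypothetical, not the paper's exact model.
import torch


class SurrogateModel(torch.nn.Module):
    """Maps the magnitude spectrogram of one source estimate to positive per-frame weights."""

    def __init__(self, n_freq):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_freq, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
            torch.nn.Softplus(),  # keeps the weights positive
        )

    def forward(self, y):
        # y: (batch, n_freq, n_frames) complex STFT of one source estimate
        mag = y.abs().transpose(-1, -2)      # (batch, n_frames, n_freq)
        return self.net(mag).squeeze(-1)     # (batch, n_frames)


def iss_sweep(y, model, eps=1e-6):
    """One sweep of iterative source steering over all sources.

    y: (batch, n_src, n_freq, n_frames) complex source estimates. The demixing
    matrix is updated implicitly through rank-1 updates of y, so the sweep is
    free of matrix inversions and straightforward to backpropagate through.
    """
    batch, n_src, n_freq, n_frames = y.shape
    for s in range(n_src):
        # per-source, per-frame weights from the learned surrogate model
        r = torch.stack([model(y[:, k]) for k in range(n_src)], dim=1)  # (batch, n_src, n_frames)
        rc = r.to(y.dtype)
        ys = y[:, s]  # (batch, n_freq, n_frames)

        # ISS steering vector: weighted cross- over auto-statistics of source s
        num = torch.einsum("bkt,bkft,bft->bkf", rc, y, ys.conj())
        den = torch.einsum("bkt,bft->bkf", rc, (ys.abs() ** 2).to(y.dtype))
        v = num / (den + eps)  # (batch, n_src, n_freq)

        # the s-th entry rescales source s instead of cancelling it
        d_s = den[:, s].real / n_frames
        v_s = 1.0 - 1.0 / torch.sqrt(d_s + eps)  # (batch, n_freq)
        mask = torch.zeros(1, n_src, 1, dtype=torch.bool, device=y.device)
        mask[0, s, 0] = True
        v = torch.where(mask, v_s.unsqueeze(1).to(y.dtype), v)

        # rank-1 update of all source estimates
        y = y - v.unsqueeze(-1) * ys.unsqueeze(1)
    return y
```

In training, a few such sweeps would be unrolled and the network optimized end-to-end with a permutation-invariant loss (for instance negative SI-SDR or a coherence-based loss) on the reconstructed signals; at test time the same sweeps would replace the fixed source prior inside AuxIVA.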