Improving Speaker Discrimination Of Target Speech Extraction With Time-Domain Speakerbeam
Marc Delcroix, Tsubasa Ochiai, Keisuke Kinoshita, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki, Kateřina Žmolíková
SPS
Target speech extraction, which extracts a single target source from a mixture given clues about the target speaker, has attracted increasing attention. We have proposed SpeakerBeam, which exploits an adaptation utterance of the target speaker to extract his/her voice characteristics, which are then used to guide a neural network towards extracting that speaker's speech. SpeakerBeam presents a practical alternative to speech separation, as it enables tracking a target speaker's speech across utterances and achieves promising speech extraction performance. However, it sometimes fails when speakers have similar voice characteristics, such as in same-gender mixtures, because the target speaker is then difficult to discriminate. In this paper, we investigate strategies for improving the speaker discrimination capability of SpeakerBeam. First, we propose a time-domain implementation of SpeakerBeam similar to that proposed for the time-domain audio separation network (TasNet), which has achieved high performance for speech separation. In addition, we investigate the use of spatial features to better discriminate speakers when microphone array recordings are available, and we add an auxiliary speaker identification loss to help learn more discriminative voice characteristics. We show experimentally that these strategies improve speech extraction performance, especially for same-gender mixtures, and outperform TasNet in terms of target speech extraction.
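To make the time-domain setting concrete, below is a minimal pure-Python sketch of the scale-invariant signal-to-noise ratio (SI-SNR) objective commonly used to train TasNet-style time-domain networks. The function name, signature, and implementation details here are illustrative assumptions, not code from the paper.

```python
import math

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR (dB) between an estimated and a reference
    waveform, both given as equal-length lists of samples.
    Illustrative sketch only; real training code operates on batched
    tensors (e.g. in PyTorch) and maximizes SI-SNR as a loss."""
    # Remove the mean from both signals (zero-mean normalization).
    est = [x - sum(est) / len(est) for x in est]
    ref = [x - sum(ref) / len(ref) for x in ref]
    # Project the estimate onto the reference to get the target component.
    dot = sum(e * r for e, r in zip(est, ref))
    ref_energy = sum(r * r for r in ref) + eps
    scale = dot / ref_energy
    target = [scale * r for r in ref]
    # Everything not aligned with the reference counts as noise.
    noise = [e - t for e, t in zip(est, target)]
    target_power = sum(t * t for t in target)
    noise_power = sum(n * n for n in noise) + eps
    return 10 * math.log10(target_power / noise_power + eps)
```

Because the estimate is first projected onto the reference, rescaling the estimate leaves the score unchanged, so the network cannot cheat by adjusting output gain. An auxiliary speaker identification loss, as investigated in the paper, would be combined with this objective as a weighted sum during training.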