Self-Training For Sound Event Detection In Audio Mixtures
Sangwook Park, Ashwin Bellur, David K. Han, Mounya Elhilali
Sound event detection (SED) takes on the task of identifying the presence of specific sound events in a complex audio recording. SED has tremendous implications in video analytics, smart speaker algorithms, and audio tagging. Recent advances in deep learning have afforded remarkable improvements in the performance of SED systems, albeit at the cost of extensive labeling efforts to train supervised methods using fully described sound class labels and timestamps. To address limitations in the availability of training data, this work proposes a self-training technique that leverages unlabeled datasets in supervised learning via pseudo-label estimation. The approach employs a dual-term objective function: a classification loss for the original labels and an expectation loss for the pseudo labels. The proposed self-training technique is applied to sound event detection in the context of the DCASE 2020 challenge and yields a notable improvement over the baseline system for this task. The self-training approach is particularly effective at extending the labeled database with concurrent sound events.
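To make the dual-term objective concrete, below is a minimal sketch of how a combined loss of this kind could look in PyTorch. The function name, the use of binary cross-entropy for the multi-label SED setting, and the weighting factor `lam` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def dual_term_loss(logits_labeled, targets, logits_unlabeled, pseudo_probs, lam=1.0):
    """Illustrative dual-term objective (assumed formulation):
    a supervised classification loss on ground-truth labels plus an
    expectation-style loss matching predictions on unlabeled clips to
    estimated pseudo-label probabilities."""
    # Classification loss on labeled data (multi-label sound event targets).
    cls_loss = F.binary_cross_entropy_with_logits(logits_labeled, targets)

    # Expectation loss: penalize deviation of predicted event probabilities
    # from the pseudo-label estimates on unlabeled data.
    probs_unlabeled = torch.sigmoid(logits_unlabeled)
    exp_loss = F.binary_cross_entropy(probs_unlabeled, pseudo_probs)

    # The relative weighting of the two terms is an assumption here.
    return cls_loss + lam * exp_loss
```

In such a setup, the pseudo-label probabilities would typically be produced by an earlier model checkpoint or a teacher model on the unlabeled clips, and the two loss terms would be averaged over their respective mini-batches.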
Chairs:
Justin Salamon