AD-YOLO: YOU LOOK ONLY ONCE IN TRAINING MULTIPLE SOUND EVENT LOCALIZATION AND DETECTION
Jin Sob Kim (Korea University); Hyun Joon Park (Korea University); Wooseok Shin (Korea University); Sung Won Han (Korea University)
Sound event localization and detection (SELD) combines the identification of sound events with the estimation of their corresponding directions of arrival (DOA). Recently, event-oriented track output formats have been adopted to solve this problem; however, they still generalize poorly to real-world scenes with unknown polyphony. To address this issue, we propose an angular-distance-based multiple SELD (AD-YOLO) format, an adaptation of the "You Look Only Once" algorithm to SELD. The AD-YOLO format lets the model learn sound occurrences in a location-sensitive manner by assigning class responsibility to DOA predictions. As a result, the model can handle polyphony regardless of the number of overlapping sound events. We evaluated AD-YOLO on the DCASE 2020-2022 Challenge Task 3 datasets using four SELD objective metrics. The experimental results show that AD-YOLO achieved outstanding overall performance and demonstrated robustness in class-homogeneous polyphony environments.
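To make the "class responsibility assigned to DOA predictions" idea concrete, the sketch below shows one plausible way to match reference events to predicted DOA vectors by angular distance, in the spirit of YOLO's responsible-box assignment. The function names, the greedy nearest-match rule, and the array shapes are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def angular_distance(u, v):
    # Angle (radians) between two unit DOA vectors on the sphere.
    dot = np.clip(np.dot(u, v), -1.0, 1.0)
    return np.arccos(dot)

def assign_responsibility(pred_doas, gt_doas):
    """Assign each ground-truth DOA to the predicted DOA with the smallest
    angular distance; that prediction becomes "responsible" for the event's
    class target (hypothetical greedy variant, not the paper's exact rule).

    pred_doas: (P, 3) array of unit DOA vectors output by the model
    gt_doas:   (G, 3) array of unit DOA vectors for reference events
    Returns a list of (gt_index, pred_index, angle) tuples.
    """
    assignments = []
    for g, gt in enumerate(gt_doas):
        angles = [angular_distance(p, gt) for p in pred_doas]
        best = int(np.argmin(angles))
        assignments.append((g, best, angles[best]))
    return assignments
```

Under this kind of assignment, the classification loss for an event is applied only to the responsible DOA prediction, which is what allows the format to scale to an arbitrary, unknown number of overlapping events.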