A Deep Neural Network-Driven Feature Learning Method For Polyphonic Acoustic Event Detection From Real-Life Recordings
Manjunath Mulimani, Shashidhar G Koolagudi, Akash B Kademani
In this paper, a Deep Neural Network (DNN)-driven feature learning method for polyphonic Acoustic Event Detection (AED) is proposed. The proposed DNN combines several layer types to characterize multiple overlapping acoustic events in a mixture. During training, the DNN learns an optimal set of discriminative spectral characteristics of the overlapping (polyphonic) acoustic events. The performance of the proposed method is evaluated on the TUT Sound Events 2016 (TUT-SED 2016) real-life dataset and on a joint Acoustic Scene Classification (ASC) and polyphonic AED dataset. Results show that the proposed approach outperforms state-of-the-art methods.
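The key property of polyphonic AED described above is that several events can be active in the same frame, so detection is a multi-label problem: each output unit carries an independent sigmoid rather than a shared softmax. A minimal sketch of such a per-frame detector is shown below; the layer sizes, the ReLU/sigmoid choice, and the 0.5 decision threshold are illustrative assumptions, not the paper's exact architecture, and the random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40 spectral features per frame, 4 event classes.
n_feats, n_hidden, n_events = 40, 64, 4

# Randomly initialised weights stand in for a trained network (assumption).
W1 = rng.standard_normal((n_feats, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_events)) * 0.1
b2 = np.zeros(n_events)

def detect_events(frame, threshold=0.5):
    """Forward pass: ReLU hidden layer, then one sigmoid per event class.

    Because each class has its own sigmoid, any number of events
    (zero, one, or several) can be flagged active in the same frame.
    """
    h = np.maximum(0.0, frame @ W1 + b1)           # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # independent event probabilities
    return p >= threshold                           # boolean mask of active events

frame = rng.standard_normal(n_feats)               # one spectral frame
active = detect_events(frame)
print(active)                                      # e.g. several True entries at once
```

A softmax output would force the classes to compete and could mark only one event per frame; the independent sigmoids are what make the detector polyphonic.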