Multi-Conditioning And Data Augmentation Using Generative Noise Model For Speech Emotion Recognition In Noisy Conditions
Upasana Tiwari, Meet Soni, Rupayan Chakraborty, Ashish Panda, Sunil Kumar Kopparapu
Degradation due to additive noise is a significant roadblock in the real-life deployment of Speech Emotion Recognition (SER) systems. Most previous work in this field has dealt with noise degradation either at the signal level or at the feature level. In this paper, to address the robustness of SER in additive noise scenarios, we propose multi-conditioning and data augmentation using an utterance-level parametric generative noise model. The generative noise model is designed to generate noise types that can span the entire noise space in the mel-filterbank energy domain. This characteristic of the model renders the system robust against unseen noise conditions. The generated noise types can be used to create multi-conditioned data for training SER systems. The multi-conditioning approach can also be used to increase the training data many fold where such data is limited. We report the performance of the proposed method on two datasets, namely EmoDB and IEMOCAP. We also explore multi-conditioning and data augmentation using noise samples from the NOISEX-92 noise database.
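The abstract does not specify the exact parametric form of the generative noise model, so the following is only a minimal sketch of the general idea: sample an utterance-level noise spectrum in the mel-filterbank energy domain and mix it with clean filterbank features at a chosen SNR to produce multi-conditioned training data. The function names (generate_noise_fbank, multi_condition), the uniform/gamma parameterisation, and the SNR values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def generate_noise_fbank(n_frames, n_mels, rng):
    """Sample a synthetic noise signal in the mel-filterbank energy domain.

    Hypothetical parameterisation: an utterance-level random spectral
    envelope (the noise "type") modulated by frame-level fluctuations.
    """
    # Random per-band mean energies define the utterance-level noise type.
    band_means = rng.uniform(low=0.1, high=1.0, size=n_mels)
    # Frame-level fluctuation around that envelope.
    fluctuation = rng.gamma(shape=2.0, scale=0.5, size=(n_frames, n_mels))
    return band_means[np.newaxis, :] * fluctuation

def multi_condition(clean_fbank, snr_db, rng):
    """Mix clean mel-filterbank energies with generated noise at a target SNR."""
    n_frames, n_mels = clean_fbank.shape
    noise = generate_noise_fbank(n_frames, n_mels, rng)
    clean_power = clean_fbank.mean()
    noise_power = noise.mean()
    # Scale the noise so the utterance-level SNR matches snr_db.
    scale = clean_power / (noise_power * 10 ** (snr_db / 10.0))
    return clean_fbank + scale * noise

# Usage: augment each clean training utterance at several SNRs,
# then train the SER model on the pooled (multi-conditioned) data.
rng = np.random.default_rng(0)
clean = rng.random((300, 40))  # stand-in for real mel-filterbank energies
augmented = [multi_condition(clean, snr, rng) for snr in (0, 5, 10, 20)]
```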