Representation Learning With Spectro-Temporal-Channel Attention For Speech Emotion Recognition
Lili Guo, Longbiao Wang, Chenglin Xu, Jianwu Dang, Eng Siong Chng, Haizhou Li
Convolutional neural networks (CNNs) are effective in learning representations for speech emotion recognition. However, CNNs do not explicitly model the associations or the relative importance of features along the spectral, temporal, and channel-wise axes. In this paper, we propose an attention module, named the spectro-temporal-channel (STC) attention module, that is integrated with a CNN to improve its representation learning ability. Our module infers an attention map along three dimensions: time, frequency, and CNN channel. Experiments are conducted on the IEMOCAP database to evaluate the effectiveness of the proposed representation learning method. The results demonstrate that the proposed method outperforms the traditional CNN method by an absolute increase of 3.13% in F1 score.
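The abstract states that the module infers an attention map along the time, frequency, and CNN-channel axes of a convolutional feature map. The sketch below illustrates one plausible way such a block could be wired, assuming a squeeze-and-excitation-style gate per axis; the class name `STCAttention`, the average-pooling squeeze, and the reduction ratio are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a spectro-temporal-channel (STC) attention block, assuming
# one 1-D gate per axis (channel, time, frequency) whose outputs are broadcast
# into a joint attention map. Pooling, fusion, and hyperparameters are
# placeholders, not the implementation described in the paper.
import torch
import torch.nn as nn


class STCAttention(nn.Module):
    """Attention over the channel, time, and frequency axes of a feature map
    shaped (batch, channels, time, freq)."""

    def __init__(self, channels: int, time_steps: int, freq_bins: int, reduction: int = 8):
        super().__init__()
        # Each gate squeezes the other two axes by average pooling, then maps
        # the resulting vector through a small bottleneck MLP with a sigmoid.
        self.channel_gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.time_gate = nn.Sequential(
            nn.Linear(time_steps, time_steps // reduction), nn.ReLU(),
            nn.Linear(time_steps // reduction, time_steps), nn.Sigmoid())
        self.freq_gate = nn.Sequential(
            nn.Linear(freq_bins, freq_bins // reduction), nn.ReLU(),
            nn.Linear(freq_bins // reduction, freq_bins), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, f = x.shape
        # Channel attention: average over time and frequency -> (b, c)
        a_c = self.channel_gate(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # Temporal attention: average over channel and frequency -> (b, t)
        a_t = self.time_gate(x.mean(dim=(1, 3))).view(b, 1, t, 1)
        # Spectral attention: average over channel and time -> (b, f)
        a_f = self.freq_gate(x.mean(dim=(1, 2))).view(b, 1, 1, f)
        # Broadcast the three gates into a joint attention map and rescale x.
        return x * a_c * a_t * a_f


if __name__ == "__main__":
    feats = torch.randn(4, 32, 100, 40)    # (batch, CNN channels, frames, mel bins)
    out = STCAttention(32, 100, 40)(feats)
    print(out.shape)                       # torch.Size([4, 32, 100, 40])
```

In this reading, the block is a drop-in refinement after a CNN layer: the input feature map is rescaled element-wise by the product of the three per-axis gates, so emotion-salient frames, frequency bands, and channels are emphasized before the features reach the classifier.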
Chairs: Carlos Busso