  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:12:03
09 Jun 2021

In this paper, a novel two-branch neural network model structure is proposed for multimodal emotion recognition, which consists of a time synchronous branch (TSB) and a time asynchronous branch (TAB). In order to capture the correlations between each word and its acoustic realisations, the TSB couples the speech and text modalities at each time step in an input window and performs pooling across time to form a single embedding vector. The TAB, on the other hand, provides cross-utterance information by integrating the sentence embeddings of a number of context utterances into another embedding vector. The final emotion classification is performed based on the fusion of the TSB and TAB embeddings. Experimental results on the IEMOCAP dataset demonstrate that the two-branch structure achieves state-of-the-art results in 4-way classification with all common testing setups. When using real automatic speech recognition (ASR) output hypotheses instead of the reference transcriptions as the text information, it is shown that the cross-utterance information considerably improves robustness against ASR errors. Further, by incorporating an extra class for all other emotions, our final 5-way classification system with ASR outputs can be viewed as a prototype for more realistic emotion recognition systems.
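The two-branch structure described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: all dimensions, the tanh projections, and the mean pooling are simplifying assumptions standing in for the paper's actual network layers. The key ideas it mirrors are (1) the TSB concatenating aligned speech and text features at each time step before pooling across time, (2) the TAB pooling sentence embeddings of context utterances, and (3) classification from the fused pair of embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def tsb_embed(speech, text, W):
    # TSB: couple the two modalities at each time step, then pool across time.
    # speech: (T, d_a) frame-level acoustic features
    # text:   (T, d_t) word features aligned to the same T steps (assumption)
    fused = np.concatenate([speech, text], axis=-1) @ W  # per-step coupling
    return np.tanh(fused).mean(axis=0)                   # single embedding vector

def tab_embed(context, V):
    # TAB: integrate sentence embeddings of context utterances.
    # context: (C, d_s) one embedding per surrounding utterance
    return np.tanh(context @ V).mean(axis=0)

def classify(speech, text, context, W, V, U):
    # Fuse the TSB and TAB embeddings and classify the emotion.
    z = np.concatenate([tsb_embed(speech, text, W), tab_embed(context, V)])
    return int(np.argmax(z @ U))

# Toy dimensions (hypothetical): 4-way classification as in the paper's main setup.
d_a, d_t, d_s, h, n_cls = 4, 6, 6, 8, 4
W = rng.standard_normal((d_a + d_t, h))
V = rng.standard_normal((d_s, h))
U = rng.standard_normal((2 * h, n_cls))

pred = classify(rng.standard_normal((20, d_a)),   # 20 time steps
                rng.standard_normal((20, d_t)),
                rng.standard_normal((3, d_s)),    # 3 context utterances
                W, V, U)
```

The design point mirrored here is that the TSB operates on time-synchronous, frame-level inputs while the TAB operates on utterance-level embeddings, so the two branches capture complementary within-utterance and cross-utterance information.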

Chairs:
Carlos Busso
