10 May 2022

This paper proposes a new model, the hierarchical and multi-view dependency modelling network (HMVDM), for emotion recognition in conversations (ERC). Modelling conversational context plays an important role in ERC, especially for multi-turn, multi-speaker conversations, which involve complex dependencies between different speakers. In the proposed HMVDM, the dependencies between speakers are modelled at both the token level and the utterance level. Specifically, HMVDM has a hierarchical structure with two main modules: 1) a token-level dependency modelling module (TDM), which learns long-range token-level dependencies across utterances in a speaker-aware manner and outputs utterance representations; and 2) an utterance-level dependency modelling module (UDM), which takes the utterance representations from the TDM as input and learns utterance-level dependencies from the intra-speaker, inter-speaker, and global views simultaneously. Extensive experiments are conducted on four ERC benchmark datasets, with state-of-the-art models employed as baselines for comparison. The empirical results demonstrate the superiority of the proposed HMVDM and confirm the importance of hierarchical, multi-view context dependency modelling for ERC.
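To make the two-level structure concrete, below is a minimal sketch of the hierarchical, multi-view idea described above. The abstract does not specify the actual layers, fusion mechanism, or speaker-aware encoding of HMVDM, so every component here is an illustrative assumption: a Transformer token encoder with per-utterance mean pooling stands in for the TDM, and three masked attention heads (intra-speaker, inter-speaker, global) stand in for the UDM views.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn


class TokenLevelDependencyModule(nn.Module):
    """TDM sketch: encode all tokens of a conversation jointly so that
    long-range, cross-utterance token dependencies can be captured, then
    mean-pool each utterance's tokens into one utterance vector.
    (A real speaker-aware encoder would also inject speaker information,
    e.g. via speaker embeddings; omitted here for brevity.)"""

    def __init__(self, d_model: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_embs, utt_ids):
        # token_embs: (1, total_tokens, d_model); utt_ids: (total_tokens,)
        hidden = self.encoder(token_embs)[0]              # (total_tokens, d)
        utt_reprs = [hidden[utt_ids == u].mean(dim=0)     # pool per utterance
                     for u in utt_ids.unique(sorted=True)]
        return torch.stack(utt_reprs)                     # (num_utts, d_model)


class UtteranceLevelDependencyModule(nn.Module):
    """UDM sketch: attend over the conversation from three views --
    intra-speaker, inter-speaker, and global -- then fuse by concatenation."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, n_classes: int = 6):
        super().__init__()
        self.intra = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.inter = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.global_ = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(3 * d_model, n_classes)

    def forward(self, utt_reprs, speakers):
        # utt_reprs: (num_utts, d_model); speakers: (num_utts,) speaker ids
        x = utt_reprs.unsqueeze(0)                                 # (1, U, d)
        same = speakers.unsqueeze(0) == speakers.unsqueeze(1)      # (U, U)
        # attn_mask=True means "may not attend to this position".
        intra_mask = ~same                                         # same speaker only
        inter_mask = same & ~torch.eye(len(speakers), dtype=torch.bool)
        # (self kept unmasked in the inter view to avoid fully-masked rows)
        intra, _ = self.intra(x, x, x, attn_mask=intra_mask)
        inter, _ = self.inter(x, x, x, attn_mask=inter_mask)
        glob, _ = self.global_(x, x, x)
        fused = torch.cat([intra, inter, glob], dim=-1)            # multi-view fusion
        return self.classifier(fused.squeeze(0))                   # (U, n_classes)


class HMVDMSketch(nn.Module):
    """Hierarchy: token-level module feeds the utterance-level module."""

    def __init__(self, d_model: int = 256, n_classes: int = 6):
        super().__init__()
        self.tdm = TokenLevelDependencyModule(d_model)
        self.udm = UtteranceLevelDependencyModule(d_model, n_classes=n_classes)

    def forward(self, token_embs, utt_ids, speakers):
        utt_reprs = self.tdm(token_embs, utt_ids)       # token -> utterance level
        return self.udm(utt_reprs, speakers)            # per-utterance emotion logits


# Toy usage: a 4-utterance, 2-speaker conversation with 12 tokens in total.
tokens = torch.randn(1, 12, 256)
utt_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3])
speakers = torch.tensor([0, 1, 0, 1])
logits = HMVDMSketch()(tokens, utt_ids, speakers)       # shape (4, 6)
```

The sketch is only meant to show the flow of information: token-level encoding first, utterance representations second, and three complementary speaker views fused before classification, matching the hierarchy described in the abstract.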
