MGAT: Multi-granularity Attention based Transformers for Multi-modal Emotion Recognition
Weiquan Fan (South China University of Technology); Xiaofen Xing (South China University of Technology); Bolun Cai (Shopee); Xiangmin Xu (South China University of Technology)
Multi-modal emotion recognition is crucial for human-computer interaction. Many existing algorithms attempt to achieve multi-modal interaction through a cross-attention mechanism. Because the original attention mechanism introduces noise and incurs heavy computation, window attention has become a new trend. However, emotions are presented asynchronously across modalities, which makes it difficult to exchange emotional information between windows. Furthermore, multi-modal data are temporally misaligned, so a single fixed window size can hardly capture cross-modal information. In this paper, we put these two issues into a unified framework and propose the multi-granularity attention based Transformers (MGAT), which addresses emotional asynchrony and modality misalignment through a multi-granularity attention mechanism. Experimental results confirm the effectiveness of our method, and state-of-the-art performance is achieved on IEMOCAP.
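To make the multi-granularity idea concrete, the sketch below shows windowed cross-modal attention applied at several window sizes in parallel, with the granularities fused by a simple average. This is a minimal illustration only: the window sizes, the averaging fusion, the use of `torch.nn.MultiheadAttention`, and the assumption that both modalities share a common sequence length are our own choices and are not the authors' exact MGAT implementation.

```python
# Illustrative sketch of multi-granularity windowed cross-attention.
# Window sizes, average fusion, and equal-length modality sequences are
# assumptions for demonstration, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiGranularityCrossAttention(nn.Module):
    def __init__(self, dim, num_heads=4, window_sizes=(4, 8, 16)):
        super().__init__()
        self.window_sizes = window_sizes
        # One cross-attention module per window granularity (assumed design).
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in window_sizes
        )

    def forward(self, query_mod, key_mod):
        # query_mod: (B, T, D), e.g. audio; key_mod: (B, T, D), e.g. text.
        B, T, _ = query_mod.shape
        outputs = []
        for w, attn in zip(self.window_sizes, self.attns):
            q = self._split_windows(query_mod, w)   # (B * nW, w, D)
            k = self._split_windows(key_mod, w)
            out, _ = attn(q, k, k)                  # windowed cross-modal attention
            outputs.append(self._merge_windows(out, B, T))
        # Fuse granularities; a learned fusion could replace this average.
        return torch.stack(outputs, dim=0).mean(dim=0)

    @staticmethod
    def _split_windows(x, w):
        B, T, D = x.shape
        pad = (w - T % w) % w
        x = F.pad(x, (0, 0, 0, pad))                # pad time axis to a multiple of w
        return x.view(B, -1, w, D).reshape(-1, w, D)

    @staticmethod
    def _merge_windows(x, B, T):
        D = x.size(-1)
        return x.reshape(B, -1, D)[:, :T, :]        # drop the padded frames


if __name__ == "__main__":
    audio = torch.randn(2, 50, 128)   # toy audio features
    text = torch.randn(2, 50, 128)    # toy text features
    mga = MultiGranularityCrossAttention(dim=128)
    print(mga(audio, text).shape)     # torch.Size([2, 50, 128])
```

Running several window sizes in parallel lets small windows keep computation local while larger windows can bridge emotional cues that appear at different times in the two modalities, which is the intuition behind addressing asynchrony and misalignment with multiple granularities.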