MULTILEVEL TRANSFORMER FOR MULTIMODAL EMOTION RECOGNITION
Junyi He, Meimei Wu, Meng Li, Xiaobo Zhu, Feng Ye (360 DigiTech, Inc.)
Multimodal emotion recognition has attracted much attention recently. Fusing multiple modalities effectively with limited labeled data remains a challenging task. Given the success of pre-trained models and the fine-grained nature of emotion expression, we argue that both aspects should be taken into account. Unlike previous methods that focus on only one of them, we introduce a novel multi-granularity framework that combines fine-grained representations with pre-trained utterance-level representations. Inspired by Transformer TTS, we propose a multilevel transformer model to perform fine-grained multimodal emotion recognition. Specifically, we explore different methods of incorporating phoneme-level embeddings with word-level embeddings. To perform multi-granularity learning, we combine the multilevel transformer model with BERT. Extensive experimental results show that the multilevel transformer model outperforms previous state-of-the-art approaches on the IEMOCAP dataset, and the multi-granularity model achieves a further performance improvement.
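To make the multi-granularity idea concrete, the following is a minimal sketch (not the authors' code) of one plausible instantiation: phoneme-level and word-level embeddings are fused and encoded by a transformer, pooled into a fine-grained utterance feature, and concatenated with a pre-trained utterance-level BERT representation before classification. All module names, dimensions, the sum-based phoneme/word fusion, and the mean pooling are illustrative assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of a multi-granularity emotion classifier.
import torch
import torch.nn as nn


class MultilevelTransformer(nn.Module):
    def __init__(self, num_phonemes, num_words, d_model=256, nhead=4,
                 num_layers=4, bert_dim=768, num_classes=4):
        super().__init__()
        self.phoneme_emb = nn.Embedding(num_phonemes, d_model)
        self.word_emb = nn.Embedding(num_words, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Classifier over fused fine-grained and utterance-level features.
        self.classifier = nn.Linear(d_model + bert_dim, num_classes)

    def forward(self, phoneme_ids, word_ids_per_phoneme, bert_cls):
        # One assumed way to incorporate phoneme-level with word-level
        # embeddings: align each phoneme to its parent word and sum the two.
        x = self.phoneme_emb(phoneme_ids) + self.word_emb(word_ids_per_phoneme)
        h = self.encoder(x)                   # (batch, seq_len, d_model)
        fine = h.mean(dim=1)                  # pool the fine-grained sequence
        fused = torch.cat([fine, bert_cls], dim=-1)
        return self.classifier(fused)


# Toy usage with random inputs (shapes only; no real IEMOCAP data).
model = MultilevelTransformer(num_phonemes=70, num_words=5000)
phonemes = torch.randint(0, 70, (2, 40))      # phoneme ids
words = torch.randint(0, 5000, (2, 40))       # parent-word id per phoneme
bert_cls = torch.randn(2, 768)                # utterance-level BERT feature
logits = model(phonemes, words, bert_cls)     # (2, num_classes)
```

Summing the phoneme and word embeddings is only one of several alignment strategies one could try here; concatenation or cross-attention between the two levels would be natural alternatives under the same framework.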