Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition

Ying Zhou (Xidian University); Xuefeng Liang (Xidian University); Yu Gu (School of Artificial Intelligence, Xidian University); Yifei Yin (Xidian University); Longshan Yao (Xidian University)

09 Jun 2023

In recent years, speech emotion recognition has become significant in widespread applications such as call centers, social robots, and health care, and has therefore attracted much attention in both industry and academia. Since the emotions present in an utterance may occur with varied probabilities, speech emotion is often ambiguous, which poses great challenges to recognition tasks. However, previous studies commonly assigned a single label or multiple labels to each utterance with certainty, so their algorithms suffer low accuracy because of this inappropriate representation. Inspired by the optimally interacting theory, we address ambiguous speech emotions by proposing a novel multi-classifier interactive learning (MCIL) method. In MCIL, multiple different classifiers first mimic several individuals who have inconsistent cognitions of ambiguous emotions, and construct new ambiguous labels (emotion probability distributions). They are then retrained with the new labels so that their cognitions interact. This procedure enables each classifier to learn better representations of ambiguous data from the others, further improving its recognition ability. Experiments on three benchmark corpora (MAS, IEMOCAP, and FAU-AIBO) demonstrate that MCIL not only improves each classifier's performance, but also raises their recognition consistency from moderate to substantial.
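The interaction loop described in the abstract can be sketched in a few lines. The sketch below is illustrative only: the pooling rule (a plain mean of the classifiers' predicted emotion distributions) and the soft-label cross-entropy training are our assumptions, not necessarily the paper's exact procedure, and simple linear softmax classifiers stand in for the paper's models.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class SoftmaxClassifier:
    """Linear softmax classifier trained on soft (probabilistic) emotion labels."""
    def __init__(self, n_features, n_classes, seed):
        r = np.random.default_rng(seed)
        self.W = 0.01 * r.standard_normal((n_features, n_classes))

    def predict_proba(self, X):
        return softmax(X @ self.W)

    def fit(self, X, Y_soft, lr=0.5, epochs=200):
        # Gradient descent on cross-entropy against a probability
        # distribution over emotions (an "ambiguous label").
        for _ in range(epochs):
            P = self.predict_proba(X)
            self.W -= lr * X.T @ (P - Y_soft) / len(X)

def interactive_round(classifiers, X, Y_soft_init):
    # Step 1: each classifier forms its own "cognition" of the data.
    for c in classifiers:
        c.fit(X, Y_soft_init)
    # Step 2: pool the inconsistent predictions into a new ambiguous
    # label (here: the mean distribution -- an assumed interaction rule).
    pooled = np.mean([c.predict_proba(X) for c in classifiers], axis=0)
    # Step 3: retrain every classifier on the pooled distribution,
    # letting the classifiers' cognitions interact.
    for c in classifiers:
        c.fit(X, pooled)
    return pooled

# Toy demo on synthetic features with ambiguous ground-truth distributions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
Y_init = softmax(X @ rng.standard_normal((8, 4)))  # 4 hypothetical emotions
clfs = [SoftmaxClassifier(8, 4, seed=s) for s in (1, 2, 3)]
pooled = interactive_round(clfs, X, Y_init)
```

After a round, `pooled` holds one probability distribution over the emotion classes per utterance; repeating `interactive_round` with `pooled` as the new labels would iterate the interaction.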