Expression-Guided EEG Representation Learning for Emotion Recognition
Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani
Learning a joint and coordinated representation across modalities can improve multimodal emotion recognition. In this paper, we propose a deep representation learning approach for emotion recognition from electroencephalogram (EEG) signals, guided by facial electromyogram (EMG) and electrooculogram (EOG) signals. We recorded EEG, EMG and EOG signals from 60 participants who watched 40 short emotion-inducing videos and self-reported their felt emotions. We designed a cross-modal encoder that jointly learns features from facial and ocular expressions and EEG responses for emotion recognition. We evaluated the method on our recorded data and on MAHNOB-HCI, a publicly available database for EEG-based emotion recognition. We demonstrate that the proposed representation improves emotion recognition performance. We also show that the learned representation can be transferred to a different database without EMG and EOG signals and still achieve superior performance. Methods that fuse behavioral and neural responses can be deployed in wearable emotion recognition solutions, which is practical in virtual reality settings where computer-vision-based expression recognition is not feasible.
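The abstract does not specify the encoder architecture, so the following is only a minimal sketch of the general idea: modality-specific encoders map EEG and EMG/EOG features into a shared latent space, and an alignment term lets the expression signals guide the EEG representation. The class name `CrossModalEncoder`, all layer sizes, and the MSE alignment loss are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    """Illustrative cross-modal encoder (hypothetical, not the paper's exact model):
    EEG and expression (EMG/EOG) branches project into a shared latent space
    that a single classifier uses for emotion recognition."""

    def __init__(self, eeg_dim=256, exp_dim=64, latent_dim=128, n_classes=4):
        super().__init__()
        # Modality-specific encoders mapping each input to the shared space.
        self.eeg_encoder = nn.Sequential(
            nn.Linear(eeg_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.exp_encoder = nn.Sequential(  # facial EMG + EOG features
            nn.Linear(exp_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Classifier operating on the shared representation.
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, eeg, exp=None):
        z_eeg = self.eeg_encoder(eeg)
        logits = self.classifier(z_eeg)
        if exp is None:
            # Transfer setting: EEG-only inference, as on a database
            # recorded without EMG/EOG.
            return logits, None
        z_exp = self.exp_encoder(exp)
        # Alignment term pulls the EEG embedding toward the expression
        # embedding, so EMG/EOG guide the learned EEG representation.
        align_loss = nn.functional.mse_loss(z_eeg, z_exp)
        return logits, align_loss
```

Under this reading, training would combine the classification loss with the alignment term, while at test time only the EEG branch is needed, which matches the transfer scenario described in the abstract.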