EEG-Based Visual Classification With Multi-Feature Joint Learning
Xin Ma, Yiping Duan, Shuzhan Hu, Xiaoming Tao, Ning Ge
With significant advances in neuroscience and artificial intelligence, decoding the process of human vision has become a hot topic in the last few decades. Although many deep learning models have been employed to explore human brain activity, the accuracy and reliability of visual classification based on electroencephalography (EEG) still leave room for improvement. In our research, we designed experiments to collect subjects' EEG data while they watched different types of images. In this way, we constructed an image-EEG dataset covering 80 ImageNet object classes. We then proposed a dual-EEGNet for joint feature learning in multi-category visual classification. Specifically, one EEGNet branch extracts spatio-temporal embeddings of the EEG signals, while the other branch extracts their time-frequency embeddings. The experimental results demonstrate that EEG signals reflect human brain activity and can distinguish the different types of images. Moreover, the proposed model with joint features achieves better classification accuracy than competing methods.
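To make the dual-branch idea concrete, the sketch below shows one possible way to combine a spatio-temporal branch over raw EEG epochs with a time-frequency branch over per-channel spectrograms, joining the two embeddings for an 80-class head. This is a minimal illustration, not the paper's implementation: the class name `DualBranchEEGClassifier`, all layer sizes, and the input shapes are assumptions, and the branches are simplified EEGNet-style convolutions rather than the exact EEGNet configuration.

```python
import torch
import torch.nn as nn


class DualBranchEEGClassifier(nn.Module):
    """Illustrative dual-branch model (hypothetical, not the authors' code).

    One branch consumes raw EEG (channels x samples) for spatio-temporal
    features; the other consumes a time-frequency map per channel
    (channels x freq_bins x frames). The two embeddings are concatenated
    and passed to a joint classification head.
    """

    def __init__(self, n_channels=64, embed_dim=128, n_classes=80):
        super().__init__()
        # Spatio-temporal branch: temporal convolution followed by a
        # depthwise spatial convolution across electrodes (EEGNet-style,
        # heavily simplified; sizes are placeholders).
        self.spatio_temporal = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(16),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1), groups=16, bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
            nn.Linear(32 * 8, embed_dim),
        )
        # Time-frequency branch: 2-D convolutions over (frequency, time)
        # maps, with EEG channels treated as input feature maps.
        self.time_frequency = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, embed_dim),
        )
        # Joint head over the concatenated embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ELU(),
            nn.Dropout(0.5),
            nn.Linear(embed_dim, n_classes),
        )

    def forward(self, eeg, tf_map):
        # eeg: (batch, channels, samples); tf_map: (batch, channels, freq, frames)
        z_st = self.spatio_temporal(eeg.unsqueeze(1))  # add singleton conv channel
        z_tf = self.time_frequency(tf_map)
        return self.classifier(torch.cat([z_st, z_tf], dim=1))


if __name__ == "__main__":
    model = DualBranchEEGClassifier()
    eeg = torch.randn(4, 64, 512)        # batch of raw EEG epochs
    tf_map = torch.randn(4, 64, 32, 64)  # matching time-frequency maps
    print(model(eeg, tf_map).shape)      # torch.Size([4, 80])
```

The key design choice this sketch mirrors is that the two views of the same EEG epoch are embedded separately and only fused at the feature level, so each branch can specialize before the joint classifier sees the combined representation.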