CROSS-SUBJECT MENTAL FATIGUE DETECTION BASED ON SEPARABLE SPATIO-TEMPORAL FEATURE AGGREGATION
Yalan Ye (University of Electronic Science and Technology of China); Yutuo He (University of Electronic Science and Technology of China); Wanjing Huang (University of Electronic Science and Technology of China); Qiaosen Dong (Sichuan University); Chong Wang (University of Electronic Science and Technology of China); Guoqing Wang (University of Electronic Science and Technology of China)
SPS
Cross-subject mental fatigue detection via electroencephalography (EEG) is challenging because EEG signals vary greatly across individuals. Existing works have exploited domain adaptation to alleviate individual discrepancies arising from factors such as personality and gender. In such methods, however, the data distributions of new and old subjects are aligned by deceiving a domain discriminator, and an inevitable issue of this paradigm is that samples near the decision boundary are easily misclassified. To address this issue, we propose a Separable Spatio-temporal Feature Aggregation (SSFA) framework that consists of a Spatio-temporal Feature Extractor (SFE) and a Separable Feature Aggregation (SFA) mechanism. Specifically, SFE exploits the spatio-temporal information in EEG and automatically tunes the weights of the temporal and spatial features, so that the model is updated along the optimal direction and obtains more discriminative features. In addition, SFA employs two classifiers combined with the sliced Wasserstein discrepancy to aggregate the samples of each class, facilitating the mapping of new subjects onto the support region of old subjects. Leave-one-subject-out experiments conducted on a public fatigue dataset show that the proposed method outperforms state-of-the-art methods on multiple evaluation metrics, achieving an accuracy of 85.91%.
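The abstract does not give the exact formulation used in SFA, but the sliced Wasserstein discrepancy it mentions is commonly computed by projecting the two classifiers' output distributions onto random one-dimensional directions and comparing the sorted projections. A minimal NumPy sketch of that generic computation (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def sliced_wasserstein_discrepancy(p1, p2, n_projections=128, seed=0):
    """Approximate the sliced Wasserstein discrepancy between two batches
    of classifier outputs p1, p2, each of shape (batch, n_classes).

    Illustrative sketch only; the paper's actual loss may differ in the
    number of projections, the ground metric, and how gradients are used.
    """
    rng = np.random.default_rng(seed)
    n_classes = p1.shape[1]
    # Draw random unit vectors defining the 1-D projection directions.
    theta = rng.normal(size=(n_projections, n_classes))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sets of outputs onto every direction.
    proj1 = p1 @ theta.T  # shape (batch, n_projections)
    proj2 = p2 @ theta.T
    # In 1-D, the Wasserstein distance reduces to comparing sorted samples.
    proj1 = np.sort(proj1, axis=0)
    proj2 = np.sort(proj2, axis=0)
    return np.mean((proj1 - proj2) ** 2)
```

Training would minimize this quantity between the two classifiers' predictions on new-subject data so that both classifiers agree, pulling those samples away from the decision boundary and into the support region of the old subjects.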