LEARNING DISCRIMINATIVE REPRESENTATION FOR FACIAL EXPRESSION RECOGNITION FROM UNCERTAINTIES
Xingyu Fan, Zhongying Deng, Kai Wang, Xiaojiang Peng, Yu Qiao
SPS
Recent progress on Facial Expression Recognition (FER) relies heavily on deep learning models trained with large-scale datasets. However, large-scale facial expression datasets always suffer from annotation uncertainties caused by ambiguous expressions, low-quality facial images, and the subjectiveness of annotators, which limits FER performance. To address this challenge, this paper introduces a novel Rayleigh loss and a weighted-softmax loss. First, we propose the Rayleigh loss to extract discriminative representations by simultaneously minimizing within-class distances and maximizing inter-class distances. Moreover, the Rayleigh loss has a Euclidean form, which makes it easy to optimize with SGD and to combine with other losses. Second, we introduce a weight that measures the uncertainty of a given sample based on its distance to its class center. Extensive experiments on RAF-DB, FERPlus, and AffectNet demonstrate the effectiveness of our method, which achieves state-of-the-art performance.
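The abstract does not give the exact formulations, so the sketch below is a hypothetical NumPy illustration of the two ideas it describes: a Rayleigh-quotient-style loss that takes the ratio of within-class scatter to between-class scatter (small when classes are compact and well separated), and an uncertainty weight that decays with a sample's distance to its class center. The function names, the exact ratio form, and the `sigma` parameter are all assumptions, not the authors' implementation.

```python
import numpy as np

def rayleigh_style_loss(features, labels, eps=1e-8):
    """Hypothetical sketch: within-class scatter / between-class scatter.

    Minimizing this ratio simultaneously shrinks within-class distances
    and enlarges inter-class distances, as the abstract describes.
    """
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Within-class term: mean squared distance of samples to their own center.
    within = np.mean([
        np.mean(np.sum((features[labels == c] - centers[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])
    # Between-class term: mean squared distance between pairs of class centers.
    k = len(classes)
    diffs = centers[:, None, :] - centers[None, :, :]
    pair_sq = np.sum(diffs ** 2, axis=-1)
    between = pair_sq.sum() / (k * (k - 1)) if k > 1 else eps
    return within / (between + eps)

def uncertainty_weight(features, labels, sigma=1.0):
    """Hypothetical sketch: weight each sample by its distance to its
    class center, so far-away (likely mislabeled or ambiguous) samples
    contribute less. The Gaussian decay and sigma are assumptions.
    """
    classes = np.unique(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in classes}
    dist = np.array([np.linalg.norm(f - centers[y])
                     for f, y in zip(features, labels)])
    return np.exp(-dist ** 2 / (2 * sigma ** 2))
```

As a sanity check, two tight clusters far apart give a near-zero loss and near-one weights, while shuffling the labels across the clusters drives the loss up sharply.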