  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 09:09
28 Oct 2020

Recent progress on Facial Expression Recognition (FER) relies heavily on deep learning models trained with large-scale datasets. However, large-scale facial expression datasets often suffer from annotation uncertainty caused by ambiguous expressions, low-quality facial images, and the subjectivity of annotators, which limits FER performance. To address this challenge, this paper introduces a novel Rayleigh loss and a weighted-softmax loss. First, we propose the Rayleigh loss to extract discriminative representations by simultaneously minimizing within-class distances and maximizing inter-class distances. Moreover, the Rayleigh loss has a Euclidean form, which makes it easy to optimize with SGD and to combine with other losses. Second, we introduce a weight that measures the uncertainty of a given sample based on its distance to its class center. Extensive experiments on RAF-DB, FERPlus, and AffectNet demonstrate the effectiveness of our method, which achieves state-of-the-art performance.
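The two ideas in the abstract — a Rayleigh-quotient-style ratio of within-class to between-class scatter, and a per-sample uncertainty weight derived from distance to the class center — can be sketched as follows. This is a minimal NumPy illustration under assumed definitions (scatter as summed squared Euclidean distances, weight as `exp(-distance)`); the function names and exact formulas are illustrative, not the authors' implementation.

```python
import numpy as np

def rayleigh_loss(features, labels):
    """Illustrative Rayleigh-style loss: within-class scatter divided by
    between-class scatter, so minimizing it shrinks within-class distances
    while growing inter-class distances."""
    labels = np.asarray(labels)
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        mask = labels == c
        center = features[mask].mean(axis=0)
        # within-class: squared distances of samples to their own center
        within += ((features[mask] - center) ** 2).sum()
        # between-class: class-size-weighted squared distance of the
        # class center to the global mean
        between += mask.sum() * ((center - global_mean) ** 2).sum()
    return within / between

def sample_weights(features, labels):
    """Illustrative uncertainty weight: samples far from their class
    center get a smaller weight (assumed form exp(-distance))."""
    labels = np.asarray(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
    dists = np.array([np.linalg.norm(f - centers[c])
                      for f, c in zip(features, labels)])
    return np.exp(-dists)
```

As a sanity check, tightly clustered, well-separated classes yield a much smaller loss than overlapping ones, and the weights fall in (0, 1].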
