    Length: 07:18
26 Oct 2020

Deep learning-based video facial authentication still has limitations in real-world applications, owing to large mode variations such as illumination, pose, and eyeglasses in real-life situations. Many existing mode-invariant facial authentication methods require a label for each mode; however, such label information is not always available in practice. To alleviate this problem, we develop an unsupervised mode-disentangling method for video facial authentication. By matching both the disentangled identity features and the dynamic features of two facial videos, the proposed method achieves strong face verification and identification performance on three publicly available datasets: KAIST-MPMI, UVA-NEMO, and YTF.
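The abstract describes verifying a pair of facial videos by matching their disentangled identity features together with their dynamic features. The sketch below illustrates only that matching step and is not the authors' model: the encoder architectures, feature dimensions, the fusion weight `alpha`, and all class and function names are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' implementation) of video-pair
# verification from two disentangled embeddings: a mode-invariant identity
# feature and a temporal dynamic feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityEncoder(nn.Module):
    """Maps a video (T frames) to a single mode-invariant identity embedding."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in frame encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, video):                       # video: (T, 3, H, W)
        per_frame = self.backbone(video)            # (T, feat_dim)
        return per_frame.mean(dim=0)                # temporal pooling -> (feat_dim,)

class DynamicEncoder(nn.Module):
    """Maps a video to a sequence-level dynamic (temporal) embedding."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.frame_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)

    def forward(self, video):                       # video: (T, 3, H, W)
        frames = self.frame_net(video).unsqueeze(0) # (1, T, feat_dim)
        _, h = self.rnn(frames)                     # h: (1, 1, hidden)
        return h.squeeze()                          # (hidden,)

def verification_score(vid_a, vid_b, id_enc, dyn_enc, alpha=0.7):
    """Fuse identity and dynamic similarities; alpha is an assumed weight."""
    id_sim = F.cosine_similarity(id_enc(vid_a), id_enc(vid_b), dim=0)
    dyn_sim = F.cosine_similarity(dyn_enc(vid_a), dyn_enc(vid_b), dim=0)
    return alpha * id_sim + (1 - alpha) * dyn_sim

if __name__ == "__main__":
    id_enc, dyn_enc = IdentityEncoder(), DynamicEncoder()
    vid_a = torch.randn(16, 3, 64, 64)              # dummy 16-frame clips
    vid_b = torch.randn(16, 3, 64, 64)
    score = verification_score(vid_a, vid_b, id_enc, dyn_enc)
    print("pair score:", score.item())              # threshold for same/different identity
```

In this sketch the identity branch is pooled over time so it stays insensitive to per-frame mode changes, while the dynamic branch keeps temporal order through a recurrent layer; a weighted sum of the two cosine similarities then scores the pair for verification.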
