FEATURE EXTRACTION FOR VISUAL SPEAKER AUTHENTICATION AGAINST COMPUTER-GENERATED VIDEO ATTACKS
Jun Ma, Shilin Wang, Aixin Zhang, Alan Wee-Chung Liew
Recent research shows that lip features can achieve reliable authentication performance with good liveness detection ability. However, with the development of sophisticated face generation methods based on deepfake technology, talking videos can now be forged with high quality, and static lip information is no longer reliable in such cases. To meet this challenge, we propose in this paper a new deep neural network structure that extracts lip features robust against both human and Computer-Generated (CG) imposters. Two novel network units, i.e., the feature-level Difference block (Diff-block) and the pixel-level Dynamic Response block (DR-block), are proposed to reduce the influence of static lip information and to represent the dynamic talking-habit information. Experiments on the GRID dataset demonstrate that the proposed network extracts discriminative and robust lip features and outperforms two state-of-the-art visual speaker authentication approaches in both the human imposter and the CG imposter scenarios.
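The abstract does not give implementation details, but the feature-level differencing idea behind the Diff-block can be illustrated with a minimal PyTorch sketch: subtracting the features of adjacent frames cancels out static lip appearance, leaving only the talking dynamics. The DiffBlock name, the 1x1 projection layer, and the 5-D tensor layout below are assumptions for illustration, not the authors' exact design.

import torch
import torch.nn as nn

class DiffBlock(nn.Module):
    # Hypothetical feature-level Difference block: subtracts features of
    # adjacent frames so static appearance cancels and only the talking
    # dynamics remain (assumed design, not the paper's exact layer).
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv to re-mix the differenced features (assumed choice).
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, channels, height, width)
        b, t, c, h, w = feats.shape
        # Frame-to-frame difference removes per-frame static content.
        diff = feats[:, 1:] - feats[:, :-1]          # (b, t-1, c, h, w)
        diff = diff.reshape(b * (t - 1), c, h, w)
        out = self.relu(self.proj(diff))
        return out.reshape(b, t - 1, c, h, w)

# Quick shape check on a dummy lip-region feature sequence.
x = torch.randn(2, 16, 64, 20, 20)   # 2 clips, 16 frames, 64 channels
print(DiffBlock(64)(x).shape)        # torch.Size([2, 15, 64, 20, 20])

In this sketch, any feature that is constant across frames (e.g., lip shape or texture of a CG face) is zeroed out by the subtraction, so a downstream classifier must rely on motion cues, which is the property the Diff-block is described as providing.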