06 Oct 2022

Existing methods for deepfake face forgery detection have achieved tremendous progress under well-controlled laboratory conditions. However, in wild scenarios where the training and testing forgeries are synthesized by different algorithms and labeled data are insufficient, performance drops greatly. In this work, we present a Semi-supervised Contrastive Learning and Knowledge Distillation-based framework (SCL-KD) for deepfake detection to reduce this performance gap. Our framework comprises three stages: self-supervised pre-training, supervised training, and knowledge distillation. Specifically, a feature encoder is first trained in a self-supervised manner on a large number of unlabeled samples through a momentum contrastive mechanism. Second, a fully-connected classifier on top of the feature encoder is trained in a supervised manner on a small amount of labeled samples to build a teacher model. Finally, a compact student model is trained with the help of the teacher model via knowledge distillation, in order to avoid overfitting to the labeled data and to generalize better across mismatched datasets. Evaluations on several benchmark datasets corroborate the strong performance of our approach in cross-dataset settings and scenarios with few labeled samples, revealing the potential of the proposed method for real-world deepfake detection.
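
The three-stage pipeline lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the two stages the abstract describes in most detail: the momentum contrastive pre-training of stage one (in the MoCo style the abstract's "momentum contrastive mechanism" suggests) and the soft-target knowledge distillation of stage three. The momentum coefficient, temperature, queue layout, and loss weighting are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of two SCL-KD stages; encoder architectures and
# all hyperparameters below are illustrative assumptions.

# --- Stage 1: momentum contrastive pre-training (MoCo-style) ---
@torch.no_grad()
def momentum_update(encoder_q: nn.Module, encoder_k: nn.Module, m: float = 0.999):
    """Update the key encoder as an exponential moving average of the
    query encoder, instead of backpropagating through it."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def contrastive_loss(q: torch.Tensor, k: torch.Tensor,
                     queue: torch.Tensor, temperature: float = 0.07):
    """InfoNCE loss on unlabeled samples: the positive is the key view
    of the same face, negatives come from a queue of past keys
    (queue shape: dim x K, keys stored column-wise)."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # N x 1
    l_neg = torch.einsum("nc,ck->nk", q, queue)            # N x K
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)      # positive at index 0
    return F.cross_entropy(logits, labels)

# --- Stage 2 (not shown): a fully-connected classifier is trained on
# --- top of the frozen encoder with the few labeled samples, giving
# --- the teacher model.

# --- Stage 3: distilling the teacher into a compact student ---
def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 4.0, alpha: float = 0.5):
    """Blend the KL divergence to the teacher's softened predictions
    with the ordinary hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In this reading, the soft teacher targets act as a regularizer on the small labeled set, which is consistent with the abstract's claim that distillation helps the student avoid overfitting and generalize to forgeries synthesized by unseen algorithms.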
