    Length: 00:10:59
08 Jun 2021

Practically anyone can generate a realistic-looking deepfake. The online prevalence of such fake videos will erode societal trust in video evidence. To counter this looming threat, the research community has recently proposed methods to detect deepfakes. However, it is still unclear how realistic deepfake videos appear to an average person and whether the algorithms are significantly better than humans at detecting them. Therefore, this paper presents a subjective study with 60 naive subjects that evaluates how hard it is for humans to recognize whether a video is a deepfake. For the study, 120 videos (60 deepfakes and 60 originals) were manually selected from the Facebook database used in Kaggle's Deepfake Detection Challenge 2020. The results of the subjective evaluation were compared with two state-of-the-art deepfake detection methods, based on Xception and EfficientNet neural networks pre-trained on two other public databases: the Google and Jigsaw subset of FaceForensics++ and the Celeb-DF v2 dataset. The experiments demonstrate that while human perception is very different from machine perception, both are successfully fooled by deepfakes, albeit in different ways. Specifically, the algorithms struggle to detect deepfake videos that humans find very easy to spot.
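
As a rough illustration of the kind of human-versus-machine comparison the abstract describes, the sketch below computes video-level accuracy for a frame-based detector (such as an Xception or EfficientNet binary classifier) and for human subjects. It is a minimal sketch, not the authors' protocol: the example data, the mean-over-frames aggregation, the 0.5 decision threshold, and the majority vote over subjects are all assumptions made for illustration.

```python
# Minimal sketch: compare per-video detector scores with human judgements.
# All inputs below are hypothetical placeholders, not data from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

# frame_scores[v]: per-frame "fake" probabilities from a detector for video v
# human_votes[v]:  0/1 answers ("is it a deepfake?") from the subjects who saw v
# labels[v]:       ground truth, 1 = deepfake, 0 = original
frame_scores = {"vid_001": np.array([0.91, 0.87, 0.95]),
                "vid_002": np.array([0.12, 0.20, 0.05])}
human_votes = {"vid_001": np.array([1, 1, 0, 1]),
               "vid_002": np.array([0, 0, 1, 0])}
labels = {"vid_001": 1, "vid_002": 0}

videos = sorted(labels)
y_true = np.array([labels[v] for v in videos])

# Video-level model score: mean of the frame-level probabilities (one common choice),
# thresholded at 0.5 to get a binary decision.
model_scores = np.array([frame_scores[v].mean() for v in videos])
model_pred = (model_scores >= 0.5).astype(int)

# Video-level human decision: majority vote over the subjects who rated the video.
human_pred = np.array([int(human_votes[v].mean() >= 0.5) for v in videos])

print("model accuracy:", (model_pred == y_true).mean())
print("human accuracy:", (human_pred == y_true).mean())
if len(set(y_true)) > 1:  # AUC is only defined when both classes are present
    print("model AUC:", roc_auc_score(y_true, model_scores))
```

With per-video scores and decisions in this form, the same arrays can also be used to find the videos where the two disagree, e.g. deepfakes the model misses but humans spot easily, which is the kind of divergence the abstract highlights.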

Chairs:
Marc Chaumont

