CROSS-MODAL KNOWLEDGE DISTILLATION IN MULTI-MODAL FAKE NEWS DETECTION

Zimian Wei, Hengyue Pan, Linbo Qiao, Xin Niu, Peijie Dong, Dongsheng Li

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:54
10 May 2022

Since the rapid dissemination of fake news has serious negative effects on society, automatic fake news detection has attracted increasing attention in recent years. In most circumstances, fake news detection is a multi-modal problem involving both textual and visual content. Many existing methods simply integrate the textual and visual features into a shared representation but overlook their correlations, which may lead to sub-optimal results. To address this problem, we propose CMC, a two-stage fake news detection method with a novel knowledge distillation that captures Cross-Modal feature Correlations during training. In the first stage of CMC, the textual and visual networks are trained mutually in an ensemble learning paradigm. The proposed cross-modal knowledge distillation function serves as a soft target that guides the training of each single-modal network with the correlations from its peer. In the second stage of CMC, the two well-trained networks are fixed, and their extracted features are fed to a fusion mechanism. The fusion model is then trained to further improve the performance of multi-modal fake news detection. Extensive experiments on the Weibo, PolitiFact, and GossipCop datasets show that CMC outperforms existing state-of-the-art methods by a large margin.
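The abstract does not give the exact form of the cross-modal distillation function, but the soft-target idea it describes can be illustrated with a standard temperature-scaled Kullback-Leibler distillation loss, where each single-modal network treats its peer's softened predictions as the soft target. The function names, class count, and temperature value below are illustrative assumptions, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """KL(teacher || student) between temperature-softened distributions.

    In a mutual-learning setup like CMC's first stage, the textual and
    visual networks alternate roles: each acts as "student" against the
    other's softened output as the soft target.
    """
    p = softmax(teacher_logits, temperature)  # soft target from the peer network
    q = softmax(student_logits, temperature)  # student's softened prediction
    # The T^2 factor keeps gradient magnitudes comparable across temperatures
    # (a common convention in distillation objectives).
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )

# Hypothetical two-class (fake vs. real) logits from the two branches:
text_logits = [2.0, 0.5]    # textual network's scores
visual_logits = [1.5, 1.0]  # visual network's scores
loss = distillation_loss(text_logits, visual_logits)
```

In practice each network would minimize this term alongside its usual cross-entropy loss on the ground-truth labels; the loss vanishes when the two modalities' predictions agree, so it penalizes only cross-modal disagreement.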
