CROSS-MODAL ADVERSARIAL CONTRASTIVE LEARNING FOR MULTI-MODAL RUMOR DETECTION

Ting Zou (Soochow University); Zhong Qian (Soochow University); Peifeng Li (Soochow University); Qiaoming Zhu (Soochow University)

06 Jun 2023

With the rapid development of social media, rumor detection on social media has become vital. Multi-modal fusion and representation play an important role in Multi-modal Rumor Detection (MRD). However, few works simultaneously learn modality-invariant features and shape the multi-modal class distribution with a discriminative loss. In this paper, we propose a Cross-Modal Adversarial Contrastive (CMAC) fusion strategy, in which adversarial learning aligns the latent feature distributions of text and image, and contrastive learning aligns the feature distributions of multi-modal samples belonging to the same category. Combining adversarial and contrastive learning yields multi-modal fused representations that are modality-invariant and have well-separated class distributions. Experimental results on two common benchmark datasets show that our approach outperforms other advanced models.
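The abstract does not specify implementation details, but the two loss terms it describes can be sketched as follows. This is a minimal illustration, assuming a logistic modality discriminator for the adversarial term (in training it would be opposed by the encoders, e.g. via gradient reversal) and a SupCon-style contrastive term over fused features; all function names and hyperparameters here are hypothetical, not the authors' code.

```python
import numpy as np

def modality_discriminator_loss(feats, modality_labels, w):
    """Binary cross-entropy of a logistic discriminator guessing the
    modality (0 = text, 1 = image) of each latent feature vector.
    Adversarial training would MAXIMIZE this w.r.t. the encoders
    (e.g. via a gradient-reversal layer) so that text and image
    features become indistinguishable, i.e. modality-invariant."""
    logits = feats @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(modality_labels * np.log(probs + eps)
                    + (1 - modality_labels) * np.log(1.0 - probs + eps))

def supervised_contrastive_loss(feats, class_labels, temperature=0.5):
    """SupCon-style loss over fused multi-modal representations:
    pull together samples sharing a class label (rumor / non-rumor),
    push apart samples from different classes."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # cosine similarities, scaled
    n = len(class_labels)
    total = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if class_labels[j] == class_labels[i]]
        if not positives:
            continue
        denom = np.sum(np.exp(sim[i, others]))
        total += -np.mean([np.log(np.exp(sim[i, j]) / denom)
                           for j in positives])
    return total / n

# Toy usage: the overall fusion objective would combine both terms,
# e.g. total = lambda_adv * adversarial + lambda_con * contrastive
# (weights hypothetical).
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))                  # 6 latent vectors
modalities = np.array([0, 1, 0, 1, 0, 1])        # text / image
classes = np.array([0, 0, 1, 1, 0, 1])           # rumor / non-rumor
w = rng.normal(size=4)                           # discriminator weights
adv = modality_discriminator_loss(feats, modalities, w)
con = supervised_contrastive_loss(feats, classes)
```

In a full model both terms are differentiated through the encoders; the sketch above only evaluates the losses on fixed features to show their shape.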
