Semantic Preserving Generative Adversarial Network For Cross-Modal Hashing

Fei Wu, Xiaokai Luo, Qinghua Huang, Pengfei Wei, Ying Sun, Xiwei Dong, Zhiyong Wu

20 Sep 2021

Cross-modal hashing has achieved significant progress in recent years. However, effectively learning more discriminative hash codes for each modality while simultaneously alleviating the loss of modality information remains a challenging problem. Focusing on this problem, in this paper we propose a novel cross-modal hashing approach named Semantic Preserving Generative Adversarial Network (SPGAN). The overall network architecture consists of two sub-networks: a semantic preserving generative adversarial network module and a discriminative hashing module. The generator maps text features into the image feature space, and the discriminator judges whether feature representations are real or generated image features. This adversarial learning process effectively reduces the modality difference and preserves as much information of the image modality as possible. The discriminative hashing module projects the real and generated image features into a Hamming space to obtain hash codes, and exploits semantic similarities to enhance the discriminative ability of the hash codes. Experiments on two widely used datasets demonstrate that SPGAN outperforms state-of-the-art related works.
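To make the described architecture concrete, the following is a minimal PyTorch sketch of the three components named in the abstract: a generator that maps text features into the image feature space, a discriminator that judges real versus generated image features, and a hashing module that projects features into a Hamming space. All dimensions, layer sizes, and class names are illustrative assumptions, not the paper's actual settings or code.

```python
# Minimal sketch of the SPGAN components described in the abstract.
# Feature dimensions, hidden sizes, and code length are assumed for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps text features into the image feature space."""
    def __init__(self, text_dim=1386, img_dim=4096, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, img_dim),
        )
    def forward(self, t):
        return self.net(t)

class Discriminator(nn.Module):
    """Judges whether a feature is a real or a generated image feature."""
    def __init__(self, img_dim=4096, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # real/fake logit
        )
    def forward(self, x):
        return self.net(x)

class HashModule(nn.Module):
    """Projects (real or generated) image features into a Hamming space."""
    def __init__(self, img_dim=4096, code_len=64):
        super().__init__()
        self.fc = nn.Linear(img_dim, code_len)
    def forward(self, x):
        h = torch.tanh(self.fc(x))        # relaxed codes in (-1, 1)
        return torch.sign(h.detach()), h  # binary codes and relaxed codes

# Toy forward pass with random features standing in for extracted modalities
G, D, H = Generator(), Discriminator(), HashModule()
text_feat = torch.randn(8, 1386)
img_feat = torch.randn(8, 4096)
fake_img_feat = G(text_feat)
real_logit, fake_logit = D(img_feat), D(fake_img_feat)
codes_real, _ = H(img_feat)
codes_fake, _ = H(fake_img_feat)
```

In a full training loop, the generator and discriminator would be optimized adversarially while the hashing module is trained with a semantic-similarity objective on the resulting codes; those loss functions are specified in the paper and are not reproduced here.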
