06 Jul 2020

Thanks to the powerful feature-learning capabilities of deep learning, some studies have introduced GANs into cross-modal hashing. However, GAN-based hashing methods are generally unstable and difficult to train during adversarial learning. To address this problem, we propose a novel AutoEncoder Semantic Adversarial Hashing method for cross-modal retrieval (AESAH). Specifically, under the guidance of semantic multi-labels, two types of adversarial autoencoder networks (inter-modality and intra-modality) are adopted to maximize semantic relevance and maintain cross-modal invariance. Under semantic supervision, the adversarial modules guide the feature-learning process, so that the modal relationships are maintained in both the common feature space and the common Hamming space. Furthermore, to ensure that the inter-modal correlation of similar item pairs is higher than that of dissimilar pairs, we use an inter-modal invariance triplet loss and a classification prediction loss to preserve similarity. Comprehensive experiments on two commonly used cross-modal datasets show that AESAH achieves better retrieval performance than several existing cross-modal retrieval methods.
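To make the triplet constraint concrete, below is a minimal PyTorch sketch of an inter-modal triplet loss over learned hash codes. The function name, the margin value, the multi-hot label convention, and the hard-positive/hard-negative mining strategy are all illustrative assumptions, not the exact formulation used in AESAH.

```python
import torch
import torch.nn.functional as F

def inter_modal_triplet_loss(img_codes, txt_codes, labels, margin=1.0):
    """Hypothetical sketch of an inter-modal invariance triplet loss.

    img_codes, txt_codes: (B, K) continuous relaxations of hash codes.
    labels: (B, C) multi-hot semantic label matrix; two items are
    "similar" if they share at least one label. For each image anchor,
    similar text codes are pulled closer than dissimilar ones by at
    least `margin`. The exact loss in the paper may differ.
    """
    # Pairwise Euclidean distances between image and text codes (B x B).
    dists = torch.cdist(img_codes, txt_codes)
    # sim[i, j] = True if items i and j share at least one semantic label.
    sim = (labels.float() @ labels.float().t()) > 0
    losses = []
    for i in range(dists.size(0)):
        pos = dists[i][sim[i]]    # distances to semantically similar texts
        neg = dists[i][~sim[i]]   # distances to dissimilar texts
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # Hardest positive vs. hardest negative, hinged at the margin.
        losses.append(F.relu(pos.max() - neg.min() + margin))
    return torch.stack(losses).mean() if losses else dists.new_zeros(())
```

In the full model, a term like this would presumably be combined with the autoencoder reconstruction, adversarial, and classification prediction losses described in the abstract.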
