  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 11:08
27 Oct 2020

Cross-modal retrieval aims to provide flexible retrieval results across different types of multimedia data. To confront the scalability issue, binary code learning (a.k.a. hashing) is advocated since it permits exact top-K retrieval with sub-linear time complexity. In this paper, we propose a new method called Semi-supervised Graph Convolutional Hashing network (SGCH), which learns a common Hamming space by preserving both intra-modality and inter-modality similarities via an end-to-end neural network. On one hand, a graph convolutional network is utilized to explore high-order intra-modality similarity and simultaneously propagate semantic information from labeled samples to unlabeled data. On the other hand, a Siamese network projects the learnt features into a common Hamming space. To bridge the inter-modality gap, an adversarial loss, which aims to learn modality-independent features by confusing a modality classifier, is incorporated into the overall loss function. Experimental evaluations on cross-media retrieval tasks demonstrate that SGCH performs competitively against the state-of-the-art methods.
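As a rough illustration (not the authors' implementation), the propagate-then-binarize idea behind the abstract can be sketched in plain NumPy: one graph-convolution step spreads information over an intra-modality similarity graph, and a linear projection followed by `sign` maps features into a common Hamming space. All sizes, weights, and the single-layer GCN here are hypothetical; SGCH itself is a deep end-to-end network with a Siamese branch and an adversarial modality classifier, which this sketch omits.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def to_hamming(H, P):
    """Project features into a common space and binarize to +-1 codes."""
    return np.sign(H @ P + 1e-12)           # tiny offset avoids sign(0)=0

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric similarity graph
X = rng.standard_normal((6, 8))             # sample features (one modality)
W = rng.standard_normal((8, 8))             # hypothetical GCN weight
P = rng.standard_normal((8, 16))            # projection to 16-bit codes

codes = to_hamming(gcn_layer(A, X, W), P)
print(codes.shape)                          # (6, 16)
```

In the full model, a second modality would be passed through its own branch into the same 16-bit space, with the adversarial loss encouraging codes whose modality cannot be identified.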
