  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 10:40
27 Oct 2020

With the rapid growth of multimedia data, the cross-modal retrieval problem has attracted considerable interest in both research and industry in recent years. However, the inconsistent data distributions of different modalities make the task challenging. In this paper, we propose the Semantically Supervised Maximal Correlation (S2MC) method for cross-modal retrieval, which incorporates semantic label information into the traditional maximal correlation framework. Combined with a maximal-correlation-based method for extracting unsupervised pairing information, our method effectively exploits supervised semantic information in both the common feature space and the label space. Extensive experiments show that our method outperforms current state-of-the-art methods on cross-modal retrieval tasks over three widely used datasets.
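The abstract does not spell out the S2MC formulation, but the unsupervised pairing component it builds on is a maximal correlation objective between paired features from the two modalities. As a rough, hedged illustration (not the paper's exact method), the sketch below computes a Soft-HGR-style maximal correlation score between two feature matrices: it rewards high inner products between paired, centered features while penalizing the product of their covariances. The function name and the choice of the Soft-HGR variant are assumptions for illustration only.

```python
import numpy as np

def soft_hgr_objective(f, g):
    """Soft-HGR-style maximal correlation score (illustrative sketch).

    f: (n, k) features extracted from modality X (e.g., images)
    g: (n, k) features extracted from modality Y (e.g., text),
       row i of f and row i of g form a paired sample.
    Returns a scalar: larger means the two feature sets are more correlated.
    """
    # Center each feature dimension, as maximal correlation assumes
    # zero-mean feature functions.
    f = f - f.mean(axis=0)
    g = g - g.mean(axis=0)
    n = f.shape[0]

    # Empirical E[f(X)^T g(Y)] over paired samples.
    inner = (f * g).sum(axis=1).mean()

    # Covariance penalty keeps the features from trivially blowing up.
    cov_f = f.T @ f / (n - 1)
    cov_g = g.T @ g / (n - 1)
    return inner - 0.5 * np.trace(cov_f @ cov_g)
```

In a retrieval pipeline, two modality-specific encoders would be trained to maximize such a score so that paired image and text features land close together in the common space; S2MC additionally injects label supervision into that space, per the abstract.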
