  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 14:51
27 Oct 2020

Cross-modal retrieval is an important field of study concerned with the design of algorithms that effectively retrieve items from one modality when given a query from another modality. Recent progress in this field has shown that supervised algorithms perform significantly better than their unsupervised counterparts by utilizing label information. In real scenarios, labels are obtained through manual or automatic annotation and are thus prone to errors. In this work, we systematically study the effect of label corruption on the performance of standard cross-modal algorithms. We propose a very simple, yet effective, pre-processing framework that can help mitigate the performance degradation caused by label corruption. First, the potentially more promising modality is automatically chosen, and two different versions of a noise-resistant classification algorithm are trained on it to generate pseudo-labels for the noisy cross-modal training data. The generated pseudo-labels can then be used by any supervised cross-modal approach to improve its performance. Extensive experiments across four cross-modal datasets with different types of label corruption show that the proposed framework gives impressive improvements for this important problem.
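The pre-processing idea described above — automatically pick one modality, fit a classifier on the noisy labels, and emit its predictions as cleaned pseudo-labels — can be sketched as follows. This is only an illustrative toy, not the paper's actual algorithm: the nearest-centroid classifier, the agreement-based modality selection, and all function names here are assumptions.

```python
# Hypothetical sketch of pseudo-label pre-processing for noisy
# cross-modal training data. The nearest-centroid classifier and the
# agreement-based modality choice are illustrative assumptions, not
# the method proposed in the paper.

def nearest_centroid_fit(features, labels):
    """Compute per-class mean vectors from (possibly noisy) labels."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def nearest_centroid_predict(centroids, x):
    """Return the class whose centroid is closest to x (squared L2)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist2(centroids[y]))

def pseudo_labels(modalities, noisy_labels):
    """Pick the modality whose fitted classifier agrees most with the
    noisy labels, then return its predictions as pseudo-labels."""
    best, best_agree = None, -1.0
    for feats in modalities:
        cents = nearest_centroid_fit(feats, noisy_labels)
        preds = [nearest_centroid_predict(cents, x) for x in feats]
        agree = sum(p == y for p, y in zip(preds, noisy_labels)) / len(preds)
        if agree > best_agree:
            best, best_agree = preds, agree
    return best
```

In this toy setting, a label flipped by annotation noise can be corrected because the class centroids are dominated by the correctly labeled majority; any downstream supervised cross-modal method would then train on the returned pseudo-labels instead of the noisy originals.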
