DEEP RANK CROSS-MODAL HASHING WITH SEMANTIC CONSISTENT FOR IMAGE-TEXT RETRIEVAL

Xiaoqing Liu, Huanqiang Zeng, Yifan Shi, Jianqing Zhu, Kai-Kuang Ma

11 May 2022

Cross-modal hashing retrieval approaches map heterogeneous multi-modal data into a common Hamming space to achieve efficient and flexible retrieval. However, existing cross-modal methods mainly exploit the feature-level similarity between multi-modal data, while the label-level similarity and the relative ranking relationships between adjacent instances are ignored. To address these problems, we propose a novel Deep Rank Cross-modal Hashing (DRCH) method that fully explores the intra-modal semantic similarity relationships. First, DRCH preserves semantic similarity by combining both label-level and feature-level information. Second, the inherent gap between modalities is narrowed by a proposed ranking alignment loss function. Finally, compact and efficient hash codes are optimized from the common semantic space. Extensive experiments on two real-world image-text retrieval datasets demonstrate the superiority of DRCH over several state-of-the-art methods.
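To make the two key ideas concrete, the PyTorch sketch below illustrates (i) a semantic similarity matrix that blends label-level and feature-level information and (ii) a triplet-style ranking alignment loss that pushes cross-modal Hamming-space affinities to follow the semantic ordering. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function names, the mixing weight `alpha`, the margin, and the `tanh` relaxation of binary codes are all hypothetical choices introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def semantic_similarity(labels, features, alpha=0.5):
    """Blend label-level and feature-level similarity.
    labels:   (n, c) multi-hot label matrix
    features: (n, d) modality features
    alpha is a hypothetical mixing weight, not taken from the paper.
    """
    # Label-level similarity: 1 if two instances share at least one label.
    label_sim = (labels @ labels.t() > 0).float()
    # Feature-level similarity: pairwise cosine similarity of features.
    feat_sim = F.cosine_similarity(
        features.unsqueeze(1), features.unsqueeze(0), dim=-1
    )
    return alpha * label_sim + (1 - alpha) * feat_sim

def ranking_alignment_loss(img_codes, txt_codes, sim, margin=0.2):
    """Triplet-style sketch of a ranking alignment loss: for each anchor,
    the most similar instance should have a higher cross-modal affinity
    than the least similar one, by at least `margin`.
    """
    # Relaxed Hamming affinity via inner products of tanh-activated codes.
    affinity = torch.tanh(img_codes) @ torch.tanh(txt_codes).t()
    n = sim.size(0)
    loss, count = 0.0, 0
    for a in range(n):
        order = torch.argsort(sim[a], descending=True)
        p, q = order[0], order[-1]  # best- and worst-ranked instances
        if sim[a, p] > sim[a, q]:
            # Hinge penalty when the affinity ordering violates the
            # semantic ordering by more than the margin.
            loss = loss + F.relu(margin - (affinity[a, p] - affinity[a, q]))
            count += 1
    return loss / max(count, 1)

# Toy usage with random data.
n, c, d, k = 8, 5, 32, 16
labels = (torch.rand(n, c) > 0.7).float()
img_feat = torch.randn(n, d)
img_codes, txt_codes = torch.randn(n, k), torch.randn(n, k)
sim = semantic_similarity(labels, img_feat)
print(ranking_alignment_loss(img_codes, txt_codes, sim))
```

In this sketch the loss only contrasts the extreme ends of each anchor's ranking; a fuller ranking objective would compare all ordered pairs, at quadratic cost per anchor.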