SPS
Length: 09:12
08 Jul 2020

The main challenge of sketch-based 3D shape retrieval is the large cross-modal gap between 2D sketches and 3D shapes. Most recent works employ two heterogeneous networks with a shared loss to map features from the two modalities directly into a common feature space, which fails to reduce the cross-modal differences effectively. In this paper, we propose a novel method that adopts a teacher-student strategy to learn an aligned cross-modal feature space indirectly. Specifically, our method first employs a classification network to learn discriminative features of 3D shapes. The pre-learned 3D features then serve as a teacher to guide the feature learning of 2D sketches: to align the two modalities, 2D sketch features are transferred into the pre-learned 3D feature space. Experiments on two benchmark datasets demonstrate that our method achieves superior retrieval performance over state-of-the-art approaches.
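The core idea — freeze a pre-learned 3D feature space and train the sketch branch to regress into it — can be illustrated with a minimal NumPy sketch. This is not the authors' actual networks: the dimensions, the random stand-in teacher centroids, and the single linear "student" map are all hypothetical, chosen only to show the transfer-to-teacher-space training signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64-d sketch descriptors,
# a 32-d pre-learned 3D feature space, 10 shape classes, 500 sketches.
D_SKETCH, D_SHAPE, N_CLASSES, N = 64, 32, 10, 500

# "Teacher": per-class centroids in the pre-learned 3D feature space.
# In the paper these come from a 3D shape classification network; here
# they are random stand-ins.
teacher_centroids = rng.normal(size=(N_CLASSES, D_SHAPE))

# Synthetic 2D sketch features with class labels.
labels = rng.integers(0, N_CLASSES, size=N)
sketches = rng.normal(size=(N, D_SKETCH))

# "Student": a single linear map from sketch space into the teacher's
# feature space, trained to regress each sketch onto its class centroid.
W = np.zeros((D_SKETCH, D_SHAPE))
targets = teacher_centroids[labels]           # (N, D_SHAPE)

def alignment_loss(W):
    # Mean squared distance between mapped sketches and teacher targets.
    return np.mean((sketches @ W - targets) ** 2)

lr = 0.01
loss_before = alignment_loss(W)
for _ in range(200):                          # plain gradient descent
    grad = 2 * sketches.T @ (sketches @ W - targets) / N
    W -= lr * grad
loss_after = alignment_loss(W)

print(loss_after < loss_before)               # alignment loss decreases
```

Once sketch features land in the 3D feature space, retrieval reduces to nearest-neighbor search between a mapped sketch and the stored 3D shape features — no shared loss across two jointly trained heterogeneous networks is needed.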
