Cross-Modal Guidance Network for Sketch-Based 3D Shape Retrieval
Weidong Dai, Shuang Liang
The main challenge of sketch-based 3D shape retrieval is the large cross-modal difference between 2D sketches and 3D shapes. Most recent works employ two heterogeneous networks and a shared loss to map features from the different modalities directly into a common feature space, which fails to reduce the cross-modal differences effectively. In this paper, we propose a novel method that adopts a teacher-student strategy to learn an aligned cross-modal feature space indirectly. Specifically, our method first employs a classification network to learn discriminative features of 3D shapes. The pre-learned features then serve as a teacher to guide the feature learning of 2D sketches: to align the cross-modal features, 2D sketch features are transferred into the pre-learned 3D feature space. Experiments on two benchmark datasets demonstrate that our method achieves superior retrieval performance compared with state-of-the-art approaches.
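To make the two-stage teacher-student idea concrete, the sketch below illustrates the second stage in PyTorch: a 2D sketch encoder (student) is trained so that its features match pre-learned, frozen 3D shape features (teacher). The encoder architecture, the MSE alignment loss, the feature dimension, and the dummy data are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchEncoder(nn.Module):
    """Hypothetical 2D sketch encoder acting as the student network."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        # L2-normalize so sketch features live on the same unit sphere
        # as the (assumed normalized) teacher features.
        return F.normalize(self.net(x), dim=1)

def alignment_loss(sketch_feat, teacher_feat):
    """Pull each sketch feature toward the frozen 3D feature of its
    corresponding shape. MSE is an assumption here; the paper may use
    a different distance or extra classification terms."""
    return F.mse_loss(sketch_feat, teacher_feat)

# Stage 1 (assumed done beforehand): a 3D shape classification network is
# trained and its penultimate features are cached; they are frozen and
# serve as the teacher. Stage 2 below trains the sketch encoder only.
encoder = SketchEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

sketches = torch.randn(8, 1, 224, 224)                    # dummy sketch batch
teacher_feats = F.normalize(torch.randn(8, 256), dim=1)   # cached 3D features

loss = alignment_loss(encoder(sketches), teacher_feats.detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the teacher features are fixed, the common feature space is anchored to the already-discriminative 3D space rather than negotiated jointly by two heterogeneous networks, which is the indirect alignment the abstract describes.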