Visual Relationship Detection with a Deep Convolutional Relationship Network
Yaopeng Peng, Danny Z. Chen, Lanfen Lin
Visual relationships are crucial to image understanding and can be applied to many tasks (e.g., image captioning and visual question answering). Despite great progress on many vision tasks, relationship detection remains a challenging problem due to the complexity of modeling the widely spread and imbalanced distribution of relationship triplets. In this paper, we propose a new framework that captures the relative positions and sizes of the subject and object in the feature map and adds a new branch to filter out object pairs that are unlikely to have relationships. In addition, an activation function is trained to increase the responses of certain feature maps given an object pair. Experiments on two large datasets, the Visual Relationship Detection (VRD) and Visual Genome (VG) datasets, demonstrate the superiority of our new approach over state-of-the-art methods. Further, an ablation study verifies the effectiveness of our techniques.
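The abstract mentions encoding the relative positions and sizes of the subject and object in the feature map. The snippet below is a minimal illustrative sketch (not the authors' implementation) of one common way to do this: rasterizing the two boxes as binary spatial masks at feature-map resolution, so a convolutional branch can reason about their geometry. The function names (`box_to_mask`, `pair_spatial_feature`) and the 14x14 grid size are assumptions for illustration only.

```python
# Hypothetical sketch: encode a subject/object box pair as two binary masks
# aligned with a coarse feature-map grid. Overlap and relative extent of the
# masks implicitly capture the pair's relative position and size.
import numpy as np

def box_to_mask(box, image_size, grid_size=14):
    """Rasterize a box (x1, y1, x2, y2) in image coordinates onto a
    grid_size x grid_size binary mask."""
    h, w = image_size
    mask = np.zeros((grid_size, grid_size), dtype=np.float32)
    x1 = int(np.floor(box[0] / w * grid_size))
    y1 = int(np.floor(box[1] / h * grid_size))
    x2 = int(np.ceil(box[2] / w * grid_size))
    y2 = int(np.ceil(box[3] / h * grid_size))
    mask[y1:y2, x1:x2] = 1.0
    return mask

def pair_spatial_feature(subj_box, obj_box, image_size, grid_size=14):
    """Stack subject and object masks into a 2 x grid x grid array that a
    small conv branch could consume alongside appearance features."""
    subj = box_to_mask(subj_box, image_size, grid_size)
    obj = box_to_mask(obj_box, image_size, grid_size)
    return np.stack([subj, obj], axis=0)

# Example pair, e.g. "person rides horse", in a 600 x 800 image.
feat = pair_spatial_feature((100, 50, 300, 400), (80, 250, 350, 580), (600, 800))
print(feat.shape)  # (2, 14, 14)
```

In practice such a spatial tensor would be concatenated with pooled appearance features of the pair before relationship classification; the exact fusion used in the paper is described in the full text.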