UNSUPERVISED VISUAL RELATIONSHIP INFERENCE

Taiga Kashima, Kento Masui, Hideki Nakayama

Length: 12:53
27 Oct 2020

Visual relationship inference is an essential research area for image understanding. Owing to recent advances in deep learning, significant progress has been made in this challenging area. Standard approaches recognize visual relationships through supervised learning on carefully annotated datasets, in which each image is annotated with triplets (subject-predicate-object) and bounding boxes. However, preparing such a large-scale dataset is very time-consuming. This study proposes a novel method to infer visual relationships without image-triplet pairs. Our method enforces cycle consistency and plausibility of the inferred triplets. Our experimental results demonstrate that the method can infer predicates between objects in unpaired settings, and it also achieves promising results using triplets parsed from external image descriptions.
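To make the cycle-consistency idea concrete, the sketch below shows one way such an objective could be set up for unpaired predicate inference: a predicate is inferred from subject/object region features, and it must then, together with the subject, reconstruct the object feature it came from. This is a minimal illustration only; the module names, feature dimensions, and MSE reconstruction loss are assumptions made for clarity and are not the authors' implementation.

```python
# Hypothetical sketch of a cycle-consistency objective for unpaired
# predicate inference. Names, dimensions, and losses are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class PredicateInferrer(nn.Module):
    """Predicts a predicate distribution from subject/object region features."""
    def __init__(self, feat_dim=512, num_predicates=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, num_predicates),
        )

    def forward(self, subj_feat, obj_feat):
        return self.net(torch.cat([subj_feat, obj_feat], dim=-1)).softmax(-1)


class ObjectReconstructor(nn.Module):
    """Reconstructs the object feature from the subject feature and the predicate."""
    def __init__(self, feat_dim=512, num_predicates=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + num_predicates, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, subj_feat, pred_dist):
        return self.net(torch.cat([subj_feat, pred_dist], dim=-1))


def cycle_consistency_loss(inferrer, reconstructor, subj_feat, obj_feat):
    """Infer a predicate, then require that it (with the subject feature)
    can reconstruct the object feature it was inferred from."""
    pred_dist = inferrer(subj_feat, obj_feat)
    obj_recon = reconstructor(subj_feat, pred_dist)
    return nn.functional.mse_loss(obj_recon, obj_feat)


# Example with random tensors standing in for detected region embeddings.
subj = torch.randn(8, 512)
obj = torch.randn(8, 512)
inferrer, reconstructor = PredicateInferrer(), ObjectReconstructor()
loss = cycle_consistency_loss(inferrer, reconstructor, subj, obj)
loss.backward()
```

In a full system, a plausibility term (e.g., a scorer trained on triplets parsed from external image descriptions, as the abstract mentions) would be added to keep the inferred predicates semantically sensible; that component is omitted here.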
