21 Sep 2021

We present a new visual-semantic embedding method for generalized zero-shot learning. Unlike existing embedding-based methods, which learn the correspondence between an image classifier and its class prototype for each class, we learn the mapping between an image and its semantic classifier. Given an input image, the proposed method generates a label classifier and applies it to all label embeddings to determine whether each label belongs to the input image. The semantic classifier is therefore image-conditioned and is generated at inference time. We validate our approach on four standard benchmark datasets.
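To make the idea concrete, the sketch below (not the authors' code; all names, dimensions, and the generator architecture are illustrative assumptions) shows one way an image-conditioned classifier could be generated and scored against all label embeddings.

```python
# Minimal sketch of an image-conditioned semantic classifier for zero-shot
# labeling, assuming image features and label embeddings are precomputed.
import torch
import torch.nn as nn

class ImageConditionedClassifier(nn.Module):
    """Generates a label classifier from an image feature at inference time."""

    def __init__(self, img_dim: int, label_dim: int, hidden_dim: int = 1024):
        super().__init__()
        # Hypothetical generator network: maps an image feature to the weights
        # of a classifier operating in the label-embedding space.
        self.generator = nn.Sequential(
            nn.Linear(img_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, label_dim),
        )

    def forward(self, img_feat: torch.Tensor, label_emb: torch.Tensor) -> torch.Tensor:
        # img_feat:  (batch, img_dim)        image features
        # label_emb: (num_labels, label_dim) semantic embeddings of all labels
        classifier = self.generator(img_feat)    # (batch, label_dim)
        scores = classifier @ label_emb.t()      # (batch, num_labels)
        return torch.sigmoid(scores)             # per-label membership scores

# Example usage: scores above a threshold indicate labels assigned to the
# image, including labels of classes unseen during training.
model = ImageConditionedClassifier(img_dim=2048, label_dim=300)
img_feat = torch.randn(4, 2048)      # e.g. CNN image features
label_emb = torch.randn(50, 300)     # e.g. word embeddings of 50 labels
scores = model(img_feat, label_emb)  # (4, 50)
```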
