09 Jul 2020

Generalized zero-shot learning (GZSL) for image classification is a challenging task, since not only are training examples from novel classes absent, but classification performance is also judged on both seen and unseen classes. This setting is vital in realistic scenarios where large amounts of labeled data are not easily available. Some existing methods for GZSL recognize novel classes using latent features learned through a variational autoencoder (VAE), but few have addressed the problem that image features exhibit large intra-class variance, which degrades the quality of the latent features. Hence, we propose to match soul samples to reduce this variance, regularized by pre-trained classifiers, which enables the VAE to generate much more discriminative latent features for training the softmax classifier. We evaluate our method on four benchmark datasets, i.e., CUB, SUN, AWA1, and AWA2, and experimental results demonstrate that our model achieves new state-of-the-art performance in the generalized zero-shot and few-shot learning settings.
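As a rough illustration of the idea described above, the sketch below (PyTorch, not the authors' code) trains a small VAE on image features, adds a matching term that pulls each latent code toward an assumed per-class "soul sample" (here simply taken to be the class centroid in latent space), and fits a softmax classifier on the latent features. The pre-trained-classifier regularization mentioned in the abstract is omitted, and all layer sizes, loss weights, and the centroid-based definition of soul samples are illustrative assumptions.

```python
# Minimal sketch, not the authors' implementation: VAE on image features with a
# soul-sample matching term, plus a softmax classifier on the latent codes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureVAE(nn.Module):
    def __init__(self, feat_dim=2048, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar, z


def soul_samples(z, y, num_classes):
    """Assumed definition: the per-class mean of latent codes ("soul sample")."""
    souls = torch.zeros(num_classes, z.size(1), device=z.device)
    for c in y.unique():
        souls[c] = z[y == c].mean(dim=0)
    return souls


def vae_loss(x, x_rec, mu, logvar, z, y, souls, beta=1.0, lam=0.1):
    rec = F.mse_loss(x_rec, x)                                      # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence
    match = F.mse_loss(z, souls[y])      # pull each latent toward its soul sample
    return rec + beta * kld + lam * match


# Toy usage with random stand-ins for image features and labels.
feat_dim, num_classes = 2048, 10
vae = FeatureVAE(feat_dim)
clf = nn.Linear(64, num_classes)                 # softmax classifier on latents
opt = torch.optim.Adam(list(vae.parameters()) + list(clf.parameters()), lr=1e-3)

x = torch.randn(32, feat_dim)
y = torch.randint(0, num_classes, (32,))

x_rec, mu, logvar, z = vae(x)
souls = soul_samples(z.detach(), y, num_classes)
loss = vae_loss(x, x_rec, mu, logvar, z, y, souls) + F.cross_entropy(clf(z), y)
loss.backward()
opt.step()
```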
