04 May 2020

Learning to hash via generative models has become a promising paradigm for fast similarity search in document retrieval. The binary hash codes are treated as Bernoulli latent variables when training a variational autoencoder (VAE). However, the Bernoulli prior lacks the structural regularization needed to generate more efficient binary codes. In this paper, we present an end-to-end Wasserstein autoencoder (WAE) for text hashing that avoids the non-differentiable operators of the reparameterization trick: through adversarial learning, the latent variables can be matched to any discrete prior from which we can sample. Moreover, we can generate more efficient discrete codes by imposing a structural constraint on the prior, such as the bit-balance constraint. Our experiments show that the proposed model is competitive with state-of-the-art methods in both unsupervised and supervised scenarios.
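The abstract describes the mechanism only at a high level. The PyTorch sketch below illustrates one plausible reading of it, not the paper's actual architecture: all names (`encoder`, `sample_balanced_prior`, `wae_step`), the layer sizes, and the loss weight `lam` are illustrative assumptions. An encoder emits soft codes, a discriminator adversarially matches the aggregated code distribution to a bit-balanced discrete prior, and codes are binarized by thresholding at retrieval time.

```python
# Hypothetical sketch of adversarial WAE text hashing on bag-of-words input.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_bits, vocab_size, hidden = 32, 10000, 500  # assumed dimensions

# Encoder: maps a bag-of-words vector to n_bits soft code probabilities.
encoder = nn.Sequential(
    nn.Linear(vocab_size, hidden), nn.ReLU(),
    nn.Linear(hidden, n_bits), nn.Sigmoid(),
)
# Decoder: reconstructs the word distribution from a code.
decoder = nn.Sequential(
    nn.Linear(n_bits, hidden), nn.ReLU(),
    nn.Linear(hidden, vocab_size),
)
# Discriminator: tells encoder codes apart from samples of the discrete prior.
disc = nn.Sequential(
    nn.Linear(n_bits, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)

def sample_balanced_prior(batch_size):
    """Sample from a structured discrete prior: each code has exactly
    n_bits // 2 ones, enforcing a bit-balance constraint."""
    z = torch.zeros(batch_size, n_bits)
    for i in range(batch_size):
        idx = torch.randperm(n_bits)[: n_bits // 2]
        z[i, idx] = 1.0
    return z

def wae_step(x, opt_ae, opt_disc, lam=1.0):
    """One adversarial WAE training step on a batch of bag-of-words vectors x."""
    z_fake = encoder(x)                        # soft codes from the encoder
    z_real = sample_balanced_prior(x.size(0))  # samples from the discrete prior

    # 1) Train the discriminator to separate prior samples from encoder codes.
    d_loss = (
        F.binary_cross_entropy_with_logits(disc(z_real), torch.ones(x.size(0), 1))
        + F.binary_cross_entropy_with_logits(disc(z_fake.detach()), torch.zeros(x.size(0), 1))
    )
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Train the autoencoder to reconstruct the document and fool the
    #    discriminator, pushing the aggregated code distribution toward the
    #    prior without reparameterizing any discrete variable.
    logits = decoder(z_fake)
    recon = -(x * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    adv = F.binary_cross_entropy_with_logits(disc(z_fake), torch.ones(x.size(0), 1))
    ae_loss = recon + lam * adv
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return recon.item(), d_loss.item()

# Hypothetical setup; at retrieval time, binarize: hash_code = encoder(x) > 0.5
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
```

Because the prior enters the objective only through samples fed to the discriminator, any discrete prior we can sample from, bit-balanced or otherwise, can be imposed without gradient estimators such as straight-through.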
