10 May 2022

Unsupervised anomaly detection is a challenging problem in which the aim is to detect irregular data instances. Because generative models can learn the data distribution, they have been proposed for anomaly detection. In this direction, the variational autoencoder (VAE) is popular, as it enforces an explicit probabilistic interpretation of the latent space. Other generative autoencoders (AEs), such as the denoising AE (DAE) and the contractive AE (CAE), also model the data generation process, but without enforcing an explicit probabilistic interpretation of the latent space. While the benefit of such a latent space is intuitively clear for generative tasks, it is unclear whether it is crucial for anomaly detection. In this paper, we therefore investigate the extent to which different latent-space attributes of AEs affect their anomaly detection performance. We take the conventional, deterministic AE, which we refer to as the plain AE (PAE), as the baseline for comparison. Our results on five different datasets reveal that an explicit probabilistic latent space is not necessary for good performance. The best results on most of the datasets are obtained with the CAE, which enjoys stable latent representations.
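The abstract compares latent-space properties rather than proposing a new algorithm, but the AE-based detection recipe it builds on is easy to sketch. Below is a minimal, hypothetical PyTorch illustration (the architecture, layer sizes, and penalty weight `lam` are assumptions for illustration, not the authors' settings): a deterministic PAE scored by reconstruction error, plus the contractive penalty on the encoder Jacobian that distinguishes the CAE and yields its stable latent representations.

```python
import torch
import torch.nn as nn

class PlainAE(nn.Module):
    """Minimal deterministic autoencoder (a sketch of the PAE baseline)."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model: PlainAE, x: torch.Tensor) -> torch.Tensor:
    """Per-instance reconstruction error; larger values flag likely anomalies."""
    with torch.no_grad():
        x_hat = model(x)
    return ((x - x_hat) ** 2).mean(dim=1)

def contractive_loss(model: PlainAE, x: torch.Tensor, lam: float = 1e-4) -> torch.Tensor:
    """Reconstruction loss plus the CAE's contractive penalty: the squared
    Frobenius norm of the encoder Jacobian, which encourages latent codes
    that change little under small input perturbations."""
    x = x.detach().requires_grad_(True)
    z = model.encoder(x)
    x_hat = model.decoder(z)
    recon = ((x - x_hat) ** 2).mean()
    # Accumulate ||dz/dx||_F^2 one latent unit at a time via autograd.
    jac_sq = x.new_zeros(())
    for j in range(z.shape[1]):
        (g,) = torch.autograd.grad(z[:, j].sum(), x, create_graph=True)
        jac_sq = jac_sq + (g ** 2).sum()
    return recon + lam * jac_sq / x.shape[0]
```

In this sketch, training with plain MSE gives the PAE, while swapping in `contractive_loss` gives a CAE; either way, test instances are ranked by `anomaly_score` and the highest-error instances are flagged as anomalies.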
