Reverse Error Modeling For Improved Semantic Segmentation
Christopher Kuhn, Markus Hofbauer, Goran Petrovic, Eckehard Steinbach
SPS
Most previous semi-supervised methods that seek to obtain disentangled representations using variational autoencoders divide the latent representation into two components: a non-interpretable part and a disentangled part that explicitly models the factors of interest. With such models, features associated with high-level factors are not explicitly modeled; they can either be lost or, at best, entangled with the other latent variables, leading to poor disentanglement. To address this problem, we propose a novel conditional dependency structure in which both the labels and their features belong to the latent space. Using the CelebA dataset, we show that the proposed model learns meaningful representations, and we provide quantitative and qualitative comparisons with other approaches that demonstrate the effectiveness of the proposed method.
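The partitioned-latent idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's architecture: the split sizes, the `reparameterize` and `split_latent` helpers, and the use of NumPy instead of a deep-learning framework are all illustrative assumptions; only the general pattern (a latent vector divided into a disentangled part for factors of interest and an unstructured remainder) comes from the text.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Standard VAE reparameterization: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def split_latent(z, n_factor_dims):
    """Partition a latent vector into a disentangled part that models the
    factors of interest and a non-interpretable remainder (illustrative split)."""
    return z[..., :n_factor_dims], z[..., n_factor_dims:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical encoder output: an 8-dimensional Gaussian posterior,
    # with the first 2 dimensions reserved for the factors of interest.
    mu = np.zeros(8)
    log_var = np.zeros(8)  # unit variance
    z = reparameterize(mu, log_var, rng)
    z_factors, z_free = split_latent(z, n_factor_dims=2)
    print(z_factors.shape, z_free.shape)
```

In the conventional two-part models the abstract criticizes, only `z_factors` is supervised; the proposal instead places both the labels and their associated features inside the latent space, rather than leaving those features to leak into `z_free`.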