27 Oct 2020

In this work, we target a known problem in representation learning: beyond coarse classification, how can we better model fine-grained categorization? To address this problem, we introduce Deep Subclass Linear Discriminant Analysis (DeepSDA), which exploits intra-class variation and inter-class similarity during training. We achieve multimodal classification by maximizing the ratio of the between-subclass scatter matrix to the within-subclass scatter matrix, i.e., by maximizing the eigenvalues along the discriminative eigenvector directions. The deep neural network thereby learns a more discriminative representation space with higher class separation in the linearly separable latent space. We show that DeepSDA leads to significant improvements on diverse fine-grained categorization and attribute learning benchmarks.
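
To make the scatter-ratio objective concrete, the following is a minimal sketch in Python/PyTorch of a trace-based surrogate: it rewards large between-subclass scatter relative to within-subclass scatter of the learned embeddings. It assumes features come from the network's penultimate layer and that subclass labels are available (e.g., obtained by clustering within each class); the function name subclass_scatter_loss and these details are illustrative assumptions, not the paper's exact eigen-decomposition formulation.

    import torch

    def subclass_scatter_loss(features, subclass_labels, eps=1e-4):
        # features:        (N, D) embeddings from the penultimate layer.
        # subclass_labels: (N,) integer subclass ids, e.g. class_id * K + cluster_id.
        # Returns a scalar whose minimization increases between-subclass scatter
        # relative to within-subclass scatter (a trace-ratio surrogate).
        global_mean = features.mean(dim=0, keepdim=True)      # (1, D)
        s_w = features.new_zeros(())                          # within-subclass scatter (trace)
        s_b = features.new_zeros(())                          # between-subclass scatter (trace)
        for c in subclass_labels.unique():
            mask = subclass_labels == c
            sub_feats = features[mask]                        # (n_c, D)
            sub_mean = sub_feats.mean(dim=0, keepdim=True)    # (1, D)
            s_w = s_w + ((sub_feats - sub_mean) ** 2).sum()
            s_b = s_b + mask.sum() * ((sub_mean - global_mean) ** 2).sum()
        # Maximizing s_b / s_w is equivalent to minimizing s_w / s_b.
        return s_w / (s_b + eps)

    # Toy usage: 32 embeddings of dimension 128, 6 hypothetical subclasses.
    feats = torch.randn(32, 128, requires_grad=True)
    subs = torch.randint(0, 6, (32,))
    loss = subclass_scatter_loss(feats, subs)
    loss.backward()

In practice such a term would be combined with a standard classification loss so that the embedding stays both separable and discriminative at the subclass level.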
