  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:28
03 Oct 2022

Recently, research on deep neural networks using self-attention has been actively conducted and shown to be effective. However, self-attention incurs a large computational cost. Axial Attention reduces the computational complexity by factorizing 2D self-attention into two 1D self-attentions, but its visualized activation regions were found to be less accurate than those of CNNs. Inaccurate activation regions mean that regions unrelated to the true class are referenced for classification. This degrades the accuracy of segmentation, a pixel-by-pixel classification task, in identifying object boundaries. In this study, we attempted to mitigate this problem by introducing a Local Embedding Unit into Axial Attention. In addition, by improving the structure of Axial Attention, we were able to improve accuracy while reducing both the computational cost and the number of parameters. Our model achieved useful results on ImageNet classification and CamVid segmentation.
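The axial factorization mentioned above can be illustrated with a minimal NumPy sketch: instead of attending over all H×W positions at once (cost O((HW)²)), attention is applied along the height axis and then the width axis (cost O(HW·(H+W))). This is a simplified single-head version with identity query/key/value projections for brevity; it is not the paper's implementation and omits the positional terms and the proposed Local Embedding Unit.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_1d(x):
    # Single-head self-attention along axis -2 (the sequence axis).
    # x: (..., L, C). Identity Q/K/V projections for brevity.
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(x.shape[-1])  # (..., L, L)
    return softmax(scores, axis=-1) @ x

def axial_attention(x):
    # x: (H, W, C) feature map.
    # Height axis: transpose to (W, H, C) so each column is a length-H
    # sequence, attend, then transpose back.
    y = np.swapaxes(attention_1d(np.swapaxes(x, 0, 1)), 0, 1)
    # Width axis: each row of y is already a length-W sequence.
    return attention_1d(y)

feat = np.random.default_rng(0).normal(size=(4, 5, 8))
out = axial_attention(feat)  # same shape as the input feature map
```

Each position thus aggregates information from its entire row and column; stacking two such layers gives a full receptive field at far lower cost than 2D self-attention.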
