DCM: A DENSE-ATTENTION CONTEXT MODULE FOR SEMANTIC SEGMENTATION

Shenghua Li, Quan Zhou, Jia Liu, Yawen Fan, Xiaofu Wu, Longin Jan Latecki, Jie Wang

27 Oct 2020

For image semantic segmentation, a fully convolutional network is usually employed as the encoder to abstract visual features from the input image, and a carefully designed decoder is used to decode the final feature map of the backbone. However, the output resolution of backbones designed for image classification is too low for segmentation, and most existing methods for recovering a high-resolution feature map cannot fully exploit the information in the different layers of the backbone. To adequately extract the information of a single layer, the multi-scale context information across layers, and the global information of the backbone, we present a new attention-augmented module named \emph{Dense-attention Context Module (DCM)}, which connects common backbones to other decoding heads. Experiments show promising results for our method on the Cityscapes dataset.
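The abstract does not specify DCM's internal design, but the general idea it describes, fusing feature maps from several backbone stages with attention so that single-layer, multi-scale, and global information all contribute, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy in NumPy, not the authors' module: `upsample_nearest` and `dense_attention_fuse` are hypothetical names, and the "attention" here is a simple softmax over global-average-pooled level descriptors standing in for a learned attention branch.

```python
import numpy as np

def upsample_nearest(x, out_h, out_w):
    """Nearest-neighbour upsampling for a (C, H, W) feature map."""
    c, h, w = x.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return x[:, rows[:, None], cols[None, :]]

def dense_attention_fuse(features):
    """Fuse multi-level backbone features with per-level attention weights.

    features: list of (C, H_i, W_i) arrays from different backbone stages,
    all with the same channel count C. Each level is upsampled to the
    finest resolution, scored via its global-average-pooled descriptor
    (a stand-in for a learned attention branch), and the levels are
    combined as a softmax-weighted sum.
    """
    out_h = max(f.shape[1] for f in features)
    out_w = max(f.shape[2] for f in features)
    ups = [upsample_nearest(f, out_h, out_w) for f in features]
    scores = np.array([u.mean() for u in ups])      # one scalar per level
    weights = np.exp(scores - scores.max())         # numerically stable softmax
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, ups))
```

In a real segmentation decoder the per-level scores would come from learned convolutional attention heads and the fused map would feed the classification layer; this sketch only shows the multi-scale gather-and-reweight pattern the abstract alludes to.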
