Class-Aware Contextual Information for Semantic Segmentation

Huadong Tang (University of Technology Sydney); Youpeng Zhao (University of Central Florida); Yingying Jiang (Samsung Research China-Beijing); Zhuoxin Gan (Samsung Research Institute China-Beijing (SRC-B)); Qiang Wu (University of Technology Sydney)

06 Jun 2023

Exploring spatial contextual information is a well-adopted approach to improving semantic segmentation performance. However, most existing methods neglect the class association between neighboring pixels. In this paper, we propose CACINet, which consists of a Semantic Affinity Module (SAM) and a Class Association Module (CAM), to generate class-aware contextual information among pixels at a fine-grained level. SAM analyzes whether any two given pixels belong to the same class or to different classes, producing intra-class and inter-class pixel contextual information. CAM globally partitions the image into class regions and encodes each pixel according to its degree of affiliation with each class, thereby augmenting the context calculation with the class affiliation of the pixels. Comprehensive experiments demonstrate that the proposed method achieves competitive performance on two semantic segmentation benchmarks: ADE20K and PASCAL-Context.
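
To illustrate the general idea of encoding pixels by their class affiliation, the following is a minimal PyTorch sketch of a class-aware context aggregation step. It is an assumption-laden illustration, not the authors' CACINet implementation: the module name, layer choices, and fusion by residual addition are placeholders, and the auxiliary classifier here only stands in for whatever class-affiliation estimate the paper's CAM uses.

```python
# Hedged sketch: class-aware context aggregation (NOT the official CACINet code).
# Each pixel's soft class affiliation weights a set of class context vectors,
# and the pixel is re-encoded as a mixture of those class vectors.
import torch
import torch.nn as nn


class ClassAssociationSketch(nn.Module):
    """Encodes each pixel by its affiliation with per-class context vectors."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Auxiliary per-pixel classifier producing coarse class scores (assumed).
        self.cls_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Projects the class-aware context back to the feature space (assumed).
        self.proj = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        # (B, K, H*W): soft affiliation of every pixel with every class.
        affil = self.cls_head(feats).flatten(2).softmax(dim=1)
        pixels = feats.flatten(2)                                  # (B, C, H*W)
        # Class context vectors: affiliation-weighted average of pixel features.
        centers = torch.bmm(pixels, affil.transpose(1, 2))         # (B, C, K)
        mass = affil.sum(dim=2, keepdim=True).transpose(1, 2)      # (B, 1, K)
        centers = centers / mass.clamp(min=1e-6)
        # Re-encode each pixel as a mixture of class vectors, weighted by affiliation.
        context = torch.bmm(centers, affil).view(b, c, h, w)       # (B, C, H, W)
        return feats + self.proj(context)


if __name__ == "__main__":
    x = torch.randn(2, 512, 64, 64)                      # backbone features
    module = ClassAssociationSketch(512, num_classes=150)  # e.g. ADE20K has 150 classes
    print(module(x).shape)                               # torch.Size([2, 512, 64, 64])
```

In this sketch the class-affiliation map plays the role described for CAM (pixels are encoded by their degree of affiliation with each class), while the intra-/inter-class affinity analysis attributed to SAM would require an additional pairwise branch that is omitted here for brevity.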