
FAT: FIELD-AWARE TRANSFORMER FOR 3D POINT CLOUD SEMANTIC SEGMENTATION

Junjie Zhou, Yongping Xiong, Chinwai Chiu, Fangyu Liu, Xiangyang Gong

Poster 10 Oct 2023

Transformer models have achieved promising performance in point cloud segmentation. However, most existing attention schemes apply the same feature-learning paradigm to all points and overlook the large differences in size among scene objects. In this paper, we propose the Field-Aware Transformer (FAT), which adjusts the attentive receptive field for objects of different sizes. Our FAT achieves field-aware learning in two steps: introducing multi-granularity features into each attention layer and allowing each point to choose its attentive field adaptively. It contains two key designs: the Multi-Granularity Attention (MGA) scheme and the Re-Attention module. Extensive experimental results demonstrate that FAT achieves state-of-the-art performance on the S3DIS and ScanNetV2 datasets.
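The abstract does not give the exact MGA or Re-Attention formulations, so the following is a minimal sketch of the two ideas it names: local attention computed over neighborhoods of several sizes (granularities), and a per-point gating step that re-weights those granularity branches. All concrete choices below (kNN neighborhood sizes, softmax gating over branches, the residual update) are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of multi-granularity attention with per-point
# re-weighting of granularity branches. Neighborhood sizes, the gating
# mechanism, and the residual update are assumptions for illustration.

import torch
import torch.nn as nn


def knn_indices(xyz: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbours for each point; xyz is (B, N, 3)."""
    dist = torch.cdist(xyz, xyz)                      # (B, N, N) pairwise distances
    return dist.topk(k, largest=False).indices        # (B, N, k)


class SingleFieldAttention(nn.Module):
    """Local self-attention within a fixed-size neighbourhood (one granularity)."""

    def __init__(self, dim: int, k: int):
        super().__init__()
        self.k = k
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        B, N, C = feats.shape
        idx = knn_indices(xyz, self.k)                             # (B, N, k)
        q, k, v = self.to_q(feats), self.to_k(feats), self.to_v(feats)
        # Gather neighbour keys / values for every query point.
        batch = torch.arange(B, device=feats.device).view(B, 1, 1)
        k_nb = k[batch, idx]                                       # (B, N, k, C)
        v_nb = v[batch, idx]                                       # (B, N, k, C)
        attn = torch.einsum("bnc,bnkc->bnk", q, k_nb) / C ** 0.5   # (B, N, k)
        attn = attn.softmax(dim=-1)
        return torch.einsum("bnk,bnkc->bnc", attn, v_nb)           # (B, N, C)


class FieldAwareBlock(nn.Module):
    """Several attention branches at different granularities, then a per-point
    gate that lets each point choose how much to use each branch."""

    def __init__(self, dim: int, ks=(8, 16, 32)):
        super().__init__()
        self.branches = nn.ModuleList(SingleFieldAttention(dim, k) for k in ks)
        self.gate = nn.Linear(dim, len(ks))   # stand-in for the re-weighting step

    def forward(self, feats: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        branch_out = torch.stack([b(feats, xyz) for b in self.branches], dim=2)  # (B, N, G, C)
        weights = self.gate(feats).softmax(dim=-1).unsqueeze(-1)                 # (B, N, G, 1)
        return feats + (weights * branch_out).sum(dim=2)                         # residual update


if __name__ == "__main__":
    xyz = torch.rand(2, 1024, 3)       # toy scene: 2 clouds, 1024 points each
    feats = torch.rand(2, 1024, 64)    # per-point features
    out = FieldAwareBlock(dim=64)(feats, xyz)
    print(out.shape)                   # torch.Size([2, 1024, 64])
```

In this sketch, a point on a small object can put most of its gate weight on the small-neighborhood branch while a point on a large object leans on the wider branches, which is one plausible reading of "adjusting the attentive receptive field" per point.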
