Back To Old Constraints To Jointly Supervise Learning Depth, Camera Motion and Optical Flow in A Monocular Video
Hicham Sekkati, Jean-Francois Lapointe
-
SPS
Since 2021, Transformer-based models have demonstrated extraordinary achievements in the field of computer vision. Among them, MaskFormer, a Transformer-based model that adopts the mask classification approach, is an outstanding model in both semantic segmentation and instance segmentation. Considering the specific characteristics of semantic segmentation of remote sensing images (RSIs), we design CADA-MaskFormer (a Mask classification-based model with Cross-shaped window self-Attention and Densely connected feature Aggregation) by improving MaskFormer's encoder and pixel decoder. Concretely, mask classification, which generates one or more masks per category to perform elaborate segmentation, is especially suitable for handling the large within-class and small between-class variance characteristic of remote sensing images. Furthermore, we apply the cross-shaped window self-attention mechanism to model the long-range context information contained in RSIs to the maximum extent without increasing computational complexity. In addition, the densely connected feature aggregation module (DCFAM) is used as the pixel decoder to incorporate multi-level feature maps from the encoder and produce a finer semantic segmentation map. Extensive experiments on two remote sensing semantic segmentation datasets, Potsdam and Vaihingen, achieve 91.88% and 91.01% in the OA (overall accuracy) index, respectively, outperforming most competitive models designed for remote sensing images.
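The cross-shaped window self-attention mentioned in the abstract can be sketched as follows: the channels are split into two halves, one half attends within horizontal stripes and the other within vertical stripes, so that each token's effective receptive field is a cross spanning its full row and column. This is a minimal single-head numpy illustration under assumed shapes and stripe width; all function names and parameters here are illustrative, not the authors' implementation (which would also include learned projections and positional encodings).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stripe_attention(x, horizontal, sw):
    """Self-attention restricted to stripes of width `sw`.

    x: feature map of shape (H, W, C).
    horizontal=True  -> attend within horizontal stripes (rows grouped sw at a time).
    horizontal=False -> attend within vertical stripes (columns grouped sw at a time).
    """
    H, W, C = x.shape
    out = np.zeros_like(x)
    if horizontal:
        for r in range(0, H, sw):
            stripe = x[r:r + sw].reshape(-1, C)               # tokens in one stripe
            attn = softmax(stripe @ stripe.T / np.sqrt(C))    # scaled dot-product
            out[r:r + sw] = (attn @ stripe).reshape(x[r:r + sw].shape)
    else:
        for c in range(0, W, sw):
            stripe = x[:, c:c + sw].reshape(-1, C)
            attn = softmax(stripe @ stripe.T / np.sqrt(C))
            out[:, c:c + sw] = (attn @ stripe).reshape(x[:, c:c + sw].shape)
    return out

def cross_shaped_window_attention(x, sw=2):
    """Split channels: half attends in horizontal stripes, half in vertical,
    then concatenate -- each token gets a cross-shaped receptive field while
    attention cost stays linear in the off-stripe image dimension."""
    C = x.shape[-1]
    h = stripe_attention(x[..., : C // 2], horizontal=True, sw=sw)
    v = stripe_attention(x[..., C // 2:], horizontal=False, sw=sw)
    return np.concatenate([h, v], axis=-1)
```

Because each stripe contains only `sw * W` (or `sw * H`) tokens, the attention matrix per stripe is far smaller than full global attention over `H * W` tokens, which is the complexity advantage the abstract refers to.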