DTransGAN: Deblurring Transformer Based on Generative Adversarial Network
Kai Zhuang, Yuan Yuan, Qi Wang
Real-time semantic segmentation plays a significant role in many real-world applications. However, existing methods usually neglect the importance of aggregating global scene clues and multi-level semantics due to the computational limits of mobile devices. To address these challenges while maintaining high accuracy, we propose an efficient attention-augmented network, EANet. Specifically, we first leverage an extremely lightweight attention module, the sparse strip attention module (SSAM), to retain global contextual information while greatly reducing computational cost. Moreover, the meticulously designed joint attention fusion module (JAFM) follows an attention strategy to efficiently integrate semantics and details from multi-level features. On the Cityscapes test set, our network achieves 74.6% mIoU at 35.4 FPS on a single GTX 1080 Ti GPU with 1024×2048-pixel images. Extensive experiments show that EANet achieves promising results on the Cityscapes dataset.
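The abstract does not specify the internals of SSAM, but the name suggests a strip-style attention in the spirit of strip pooling: features are averaged along rows and columns so every position receives global context at a cost far below full self-attention. The PyTorch sketch below illustrates that general pattern only; the module structure, layer choices, and names (StripAttention, conv_h, conv_w, fuse) are hypothetical assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripAttention(nn.Module):
    """Hypothetical sketch of a lightweight strip-style attention block.

    Pools features along horizontal and vertical strips so each position
    can draw on global context from its row and column at low cost. The
    real SSAM design is not described in the abstract; this follows the
    common strip-pooling pattern purely as an illustration.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Depthwise 1D convolutions mix context along each strip direction.
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(3, 1),
                                padding=(1, 0), groups=channels, bias=False)
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, 3),
                                padding=(0, 1), groups=channels, bias=False)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Average-pool to a column (H x 1) strip and a row (1 x W) strip.
        col = F.adaptive_avg_pool2d(x, (h, 1))
        row = F.adaptive_avg_pool2d(x, (1, w))
        # Mix along each strip, then broadcast back to the full H x W grid.
        col = self.conv_h(col).expand(-1, -1, h, w)
        row = self.conv_w(row).expand(-1, -1, h, w)
        # Combine both directions into a sigmoid attention map and reweight.
        attn = torch.sigmoid(self.bn(self.fuse(col + row)))
        return x * attn

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 64)   # dummy backbone feature map
    out = StripAttention(64)(feats)
    print(out.shape)                      # torch.Size([2, 64, 32, 64])
```

Because the pooled strips have only H + W entries per channel instead of H x W, the attention cost grows linearly with image side length, which is consistent with the abstract's emphasis on reduced computation for mobile deployment.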