CAENet: Using Collaborative Attention Transformer and Add-Boost Strategy for Single Image Deraining
Shengdi Qin (Beijing Jiaotong University); Shunli Zhang (Beijing Jiaotong University); Yu Zhang (Beihang University); Haoyu Gao (Beijing Jiaotong University)
In recent years, Convolutional Neural Network (CNN)-based deraining methods have achieved remarkable results. However, these methods rarely exploit long-range contextual information and thus cannot effectively restore regions damaged by dense rain streaks. Moreover, the rain streaks in an image are usually complex and diverse, yet few methods fully explore this rich information, which could improve the network's feature representation ability. To address these issues, we propose a novel Collaborative Attention Enhanced Network (CAENet) for single image deraining. We first design a Residual Collaborative Attention Transformer (RCAT), consisting of several Collaborative Attention Transformer Blocks (CATBs), to effectively build long-range dependency relations. The CATB employs the self-attention mechanism to recover the contextual information of the derained images and embeds the outline feature with global attention. Furthermore, we develop an Add-Boost Module (ABM) that aggregates features at different resolutions, with which more details covered by rain streaks can be effectively restored. Experiments on synthetic and real-world datasets show that our method achieves excellent rain removal performance and outperforms seven state-of-the-art methods in terms of both quantitative evaluation metrics and qualitative visualization effects.
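The abstract does not include code, so the following is only a minimal PyTorch sketch of how the two described components might be organized: a transformer block combining self-attention over spatial tokens with a global-attention branch, and an additive multi-resolution fusion module. The class names (CATB, AddBoostModule), the SE-style channel gating used for the global-attention branch, and the bilinear resampling in the fusion step are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CATB(nn.Module):
    """Illustrative Collaborative Attention Transformer Block (assumed design).

    Self-attention over flattened spatial tokens models long-range
    dependencies; an SE-style global gate (an assumption here) re-weights
    channels with pooled global context to embed outline-level cues.
    """

    def __init__(self, dim, num_heads=4, mlp_ratio=2):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )
        # Global-attention branch: channel gating driven by pooled features.
        self.global_gate = nn.Sequential(
            nn.Linear(dim, dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(dim // 4, dim),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        t = self.norm1(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]
        gate = self.global_gate(tokens.mean(dim=1))      # (B, C) global context
        tokens = tokens * gate.unsqueeze(1)              # embed global cues
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class AddBoostModule(nn.Module):
    """Illustrative Add-Boost Module: resample multi-resolution features to a
    common size, aggregate them additively, then refine with a convolution."""

    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feats):  # list of (B, C, Hi, Wi) at different scales
        h, w = feats[0].shape[-2:]
        boosted = sum(
            F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
            for f in feats
        )
        return self.fuse(boosted)


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    block, abm = CATB(dim=32), AddBoostModule(dim=32)
    y = block(x)
    out = abm([y, F.interpolate(y, scale_factor=0.5)])
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```

The sketch only mirrors the abstract's description; the actual RCAT presumably stacks several such blocks with residual connections, and the real ABM may use learned rather than fixed resampling.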