Mask Guided Spatial-Temporal Fusion Network For Multiple Object Tracking
Shuangye Zhao, Yubin Wu, Shuai Wang, Wei Ke, Hao Sheng
SPS
The distortion of high-frequency information is the most fundamental problem in dynamic scene blur, as it leads to the degradation of image quality. However, most deep learning-based methods fail to produce satisfactory results because they ignore the importance of image structural information (high-frequency and low-frequency perception) in deblurring. In this paper, we propose a high-frequency and low-frequency information fusion deblurring network (HLFNet) that uses edge perception as a guide. The proposed HLFNet consists of a high-frequency information network (HFNet) and a low-frequency information network (LFNet). In addition, we incorporate the proposed multi-scale atrous convolution (MSA) block into LFNet, which effectively reduces the number of model parameters while expanding the receptive field. Extensive experiments show that the proposed model achieves state-of-the-art results on public datasets with fewer parameters and shorter inference time.
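The key idea behind the MSA block is that atrous (dilated) convolution enlarges the receptive field without adding parameters: reusing the same small kernel at several dilation rates covers multiple scales at the cost of a single kernel. The sketch below illustrates this in 1-D with NumPy; the function names, the 1-D setting, the shared kernel, and the sum fusion are illustrative assumptions, not the paper's actual 2-D architecture.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid-mode 1-D cross-correlation with a dilated kernel.
    A k-tap kernel at dilation d spans (k - 1) * d + 1 input samples."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

def msa_block(x, w, rates=(1, 2, 4)):
    """Illustrative multi-scale atrous block: the SAME kernel is applied
    at several dilation rates (no extra parameters per scale), and the
    branch outputs are cropped to a common length and summed."""
    outs = [dilated_conv1d(x, w, d) for d in rates]
    n = min(len(o) for o in outs)
    return sum(o[:n] for o in outs)

x = np.arange(10.0)          # toy input signal
w = np.array([1.0, 1.0, 1.0])  # one shared 3-tap kernel
y = msa_block(x, w)          # fused multi-scale response
```

With rates (1, 2, 4), a 3-tap kernel sees 3, 5, and 9 input samples respectively, so the fused output mixes local and wide context while storing only one kernel.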