An Efficient Axial-Attention Network For Video-Based Person Re-Identification

Fuping Zhang, Tianzhao Zhang, Ruoxi Sun, Chao Huang, Jianming Wei

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:08
03 Oct 2022

Motion deblurring is challenging due to fast movement of the object or the camera itself. Existing methods usually address it by training a CNN model or a Generative Adversarial Network (GAN), but these methods cannot restore fine details well. In this paper, a Deblurring Transformer based on a Generative Adversarial Network (DTransGAN) is proposed to improve deblurring performance for vehicles in surveillance-camera scenes. The proposed DTransGAN combines low-level and high-level information through skip connections, preserving as much of the original image information as possible to restore details. In addition, we replace the convolution layers in the generator with Swin Transformer blocks, which pay more attention to the reconstruction of details. Finally, we create a vehicle motion-blur dataset consisting of two parts: clear images and the corresponding blurry images. Experiments on public datasets and the collected dataset show that DTransGAN achieves state-of-the-art results on the motion deblurring task.
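The encoder-decoder-with-skip-connections idea described in the abstract can be illustrated with a minimal NumPy sketch. This is only a toy stand-in: the real DTransGAN uses Swin Transformer blocks and learned weights, whereas here average pooling and nearest-neighbour upsampling stand in for the encoder and decoder stages, and the `generator` function is a hypothetical name chosen for illustration.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling: halves spatial resolution
    # (toy stand-in for a learned encoder stage).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour upsampling: doubles spatial resolution
    # (toy stand-in for a learned decoder stage).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def generator(x):
    # Encoder: progressively coarser, more abstract features.
    e1 = downsample(x)      # low-level features, saved for the skip connection
    e2 = downsample(e1)     # high-level (coarse) features
    # Decoder: upsample and fuse the saved low-level features via skip
    # connections, so fine detail from the input is not lost.
    d1 = upsample(e2) + e1
    d0 = upsample(d1)
    return d0

img = np.arange(16.0).reshape(4, 4)
out = generator(img)
print(out.shape)  # same spatial size as the input: (4, 4)
```

The key point mirrored from the paper is the `+ e1` term: without the skip connection the decoder would only see the heavily downsampled `e2`, and the original high-frequency detail needed for deblurring would be unrecoverable.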
