  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:32
04 Oct 2022

Existing deep inpainting methods often generate content with blurry textures and distorted structures, which shows that fusing high-level structures with low-level textures remains challenging. Meanwhile, current inpainting networks only process features on the encoder side and neglect the strong generative ability of the decoder pathway. To address the above challenges, we propose a novel Dual Path Cross-Scale Attention Network (DPCSAN), which adopts a U-Net architecture as its backbone. Building on contextual attention, a Cross-Scale Attention Module (CSAM) is designed to transfer high-level structural information to adjacent low-level features across scales. CSAMs are placed between each encoder layer and its symmetric decoder layer at multiple levels, so that they both fill holes in the shallower feature maps and refine the higher-resolution features produced by upsampling. Experiments on Facade and CelebA-HQ demonstrate that our model outperforms state-of-the-art methods, generating richer details while preserving the original structures.
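The paper does not include code on this page, so the following is a minimal PyTorch sketch of the cross-scale fusion idea only, not the authors' CSAM: it substitutes a simple dot-product cross-attention for the patch-based contextual attention the paper builds on, and the module name CrossScaleAttention, the channel sizes, and the 1x1 query/key/value projections are illustrative assumptions. A low-level (higher-resolution) decoder feature map attends over a high-level (coarser) feature map, and the transferred structural information is fused back with a 3x3 convolution.

# Hypothetical sketch, NOT the released DPCSAN code: names, channel widths,
# and the dot-product attention are assumptions made for illustration.
import torch
import torch.nn as nn


class CrossScaleAttention(nn.Module):
    """Fuse a high-level (coarse, structural) feature map into a low-level
    (fine, higher-resolution) one via cross-attention over spatial positions."""

    def __init__(self, low_ch: int, high_ch: int, key_ch: int = 64):
        super().__init__()
        self.query = nn.Conv2d(low_ch, key_ch, kernel_size=1)   # queries from low-level map
        self.key = nn.Conv2d(high_ch, key_ch, kernel_size=1)    # keys from high-level map
        self.value = nn.Conv2d(high_ch, low_ch, kernel_size=1)  # structure to transfer
        self.fuse = nn.Conv2d(2 * low_ch, low_ch, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        b, c, h, w = low.shape
        q = self.query(low).flatten(2).transpose(1, 2)           # (B, HW, key_ch)
        k = self.key(high).flatten(2)                            # (B, key_ch, hw)
        v = self.value(high).flatten(2).transpose(1, 2)          # (B, hw, low_ch)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (B, HW, hw)
        transferred = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Concatenate the original low-level features with the transferred
        # high-level structure and fuse them into the refined feature map.
        return self.fuse(torch.cat([low, transferred], dim=1))


# Toy usage: a 64x64 decoder map refined by a coarser 32x32 encoder map.
if __name__ == "__main__":
    csam = CrossScaleAttention(low_ch=64, high_ch=128)
    low = torch.randn(1, 64, 64, 64)
    high = torch.randn(1, 128, 32, 32)
    print(csam(low, high).shape)  # torch.Size([1, 64, 64, 64])

In the paper's design such a module sits between each encoder layer and its symmetric decoder layer, which is why the sketch accepts feature maps at two different resolutions rather than requiring the inputs to match.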
