  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:03
11 May 2022

Recent end-to-end image inpainting methods have achieved great success with the help of generative adversarial training and structure generation. However, the generated conditional structure priors cannot help the model reconstruct finer textures, mainly because these models lack good texture prior knowledge, which limits their ability to generate fine texture. In this paper, we propose a novel detail generation and fusion network (DGFNet) that strengthens the generation of texture details for image inpainting; it comprises a dual-stream texture generation network and a multi-scale difference perception fusion network. The dual-stream texture generation network explicitly models the missing texture information and generates a texture map that compensates for the coarse result produced by the parallel network. Furthermore, to merge these two kinds of information effectively, a fusion network based on the difference perception fusion module (DPFM) is introduced to perform multi-scale fusion at the feature level. Extensive qualitative and quantitative experiments on benchmark datasets show that the proposed DGFNet achieves state-of-the-art performance.
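The abstract does not specify how the DPFM combines the coarse result with the texture map, but the general idea of difference-based fusion can be illustrated with a minimal sketch. The gating scheme below (a sigmoid over the element-wise feature difference, used to blend coarse features toward texture features where they disagree) is a hypothetical illustration, not the paper's actual module:

```python
import numpy as np

def difference_perception_fusion(coarse: np.ndarray, texture: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of difference-based feature fusion.

    Where the texture-branch features differ strongly from the coarse
    features, the gate approaches 1 and the output moves toward the
    texture features; where they agree, the coarse features dominate.
    """
    diff = np.abs(coarse - texture)            # per-element difference map
    gate = 1.0 / (1.0 + np.exp(-diff))         # soft attention weight in (0.5, 1)
    return coarse + gate * (texture - coarse)  # blend toward texture where they differ

# Example: fuse two small feature maps of the same shape.
coarse_feat = np.zeros((1, 4, 4))
texture_feat = np.ones((1, 4, 4))
fused = difference_perception_fusion(coarse_feat, texture_feat)
```

In a real network the gate would typically be produced by learned convolutions at several scales rather than a fixed sigmoid, matching the multi-scale perception fusion described above.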
