Single Image Super-Resolution Via Global-Context Attention Networks

Pengcheng Bian, Zhonglong Zheng, Dawei Zhang, Liyuan Chen, Minglu Li

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:05
22 Sep 2021

In the last few years, single image super-resolution (SISR) has benefited greatly from the rapid development of deep convolutional neural networks (CNNs), and the introduction of attention mechanisms has further improved SISR performance. However, previous methods apply one or more types of attention independently at multiple stages and ignore the correlations between different layers in the network. To address these issues, we propose a novel end-to-end architecture named global-context attention network (GCAN) for SISR, which consists of several residual global-context attention blocks (RGCABs) and an inter-group fusion module (IGFM). Specifically, the proposed RGCAB extracts representative features that capture non-local spatial interdependencies and multiple channel relations. The IGFM then aggregates and fuses hierarchical features from multiple layers discriminatively by considering the correlations among layers. Extensive experimental results demonstrate that our method achieves superior performance over other state-of-the-art methods on publicly available datasets.
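The abstract does not spell out the internals of the proposed RGCAB, but the general idea it builds on, computing a global context vector from non-local spatial attention and using a channel transform to recalibrate features, can be illustrated with a minimal NumPy sketch. Everything below (function name, weight shapes, the GCNet-style structure) is an assumption for illustration, not the paper's actual block:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_attention(x, wk, w1, w2):
    """Illustrative global-context attention block (GCNet-style sketch,
    NOT the paper's exact RGCAB).

    x  : feature map, shape (C, H, W)
    wk : (C,) weights producing one attention logit per spatial position
    w1 : (C//r, C) bottleneck down-projection for the channel transform
    w2 : (C, C//r) bottleneck up-projection
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                      # (C, HW)
    logits = wk @ flat                              # one logit per position
    attn = softmax(logits)                          # non-local spatial weights
    context = flat @ attn                           # (C,) global context vector
    transformed = w2 @ np.maximum(w1 @ context, 0)  # bottleneck transform + ReLU
    return x + transformed[:, None, None]           # residual broadcast fusion
```

The residual add at the end mirrors the "residual" in RGCAB: when the transform contributes nothing, the block degrades gracefully to identity, which keeps deep stacks of such blocks trainable.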
