
HOCA: Higher-Order Channel Attention for Single Image Super-Resolution

Yalei Lv, Tao Dai, Bin Chen, Jian Lu, Shu-Tao Xia, Jingchao Cao

08 Jun 2021

Convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SR). More recent works (e.g., RCAN and SAN) have obtained remarkable performance with channel attention based on first- or second-order feature statistics. However, these methods neglect the rich feature statistics beyond second order, which limits the representation ability of CNNs. To address this issue, we propose a higher-order channel attention (HOCA) module to enhance the representation ability of CNNs. In our HOCA module, to capture different types of semantic information, we first compute the k-th-order statistics of features and then apply channel attention to learn the feature interdependencies. Considering the diversity of input content, we design a gate mechanism to adaptively select a specific k-th-order channel attention. Moreover, HOCA serves as a plug-and-play module and can easily be plugged into existing state-of-the-art CNN-based SR methods. Extensive experiments on public benchmarks show that our HOCA module effectively improves the performance of various CNN-based SR methods.
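The abstract names three ingredients: k-th-order feature statistics, channel attention over each order, and a gate that adaptively selects among the orders. Below is a minimal PyTorch sketch of how such a module could be wired together. It is an assumption-laden illustration, not the authors' implementation: the central-moment statistics, the squeeze-and-excitation-style attention branches, and the softmax gate driven by first-order statistics are all choices made here for concreteness.

```python
# Minimal sketch of a HOCA-style module (hypothetical; details assumed).
import torch
import torch.nn as nn


class HigherOrderChannelAttention(nn.Module):
    """Channel attention over k-th-order feature statistics, with a gate
    that adaptively mixes the per-order attention maps."""

    def __init__(self, channels: int, max_order: int = 3, reduction: int = 16):
        super().__init__()
        self.max_order = max_order
        # One squeeze-and-excitation-style branch per statistic order k.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
            for _ in range(max_order)
        )
        # Gate: predicts a softmax weight per order from first-order stats
        # (an assumption; the paper's gate input may differ).
        self.gate = nn.Sequential(
            nn.Linear(channels, max_order),
            nn.Softmax(dim=-1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        mean = x.mean(dim=(2, 3))  # first-order statistic, shape (B, C)
        # k-th-order central moments per channel for k = 2..max_order.
        stats = [mean]
        centered = x - mean.view(b, c, 1, 1)
        for k in range(2, self.max_order + 1):
            stats.append(centered.pow(k).mean(dim=(2, 3)))
        # Per-order channel attention maps, mixed by the learned gate.
        attns = torch.stack(
            [branch(s) for branch, s in zip(self.branches, stats)], dim=1
        )  # (B, K, C)
        weights = self.gate(mean).unsqueeze(-1)  # (B, K, 1)
        attn = (weights * attns).sum(dim=1)  # (B, C)
        return x * attn.view(b, c, 1, 1)


# Usage: a plug-and-play unit that preserves the feature shape, so it could
# slot in after the last convolution of a residual block in an RCAN- or
# EDSR-style backbone.
hoca = HigherOrderChannelAttention(channels=64)
y = hoca(torch.randn(2, 64, 48, 48))  # same shape as the input
```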

Chairs:
C.-C. Jay Kuo
