HOCA: Higher-Order Channel Attention for Single Image Super-Resolution
Yalei Lv, Tao Dai, Bin Chen, Jian Lu, Shu-Tao Xia, Jingchao Cao
Convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SR). Recent works (e.g., RCAN and SAN) have obtained remarkable performance with channel attention based on first- or second-order statistics of features. However, these methods neglect the rich feature statistics beyond second order, thus limiting the representation ability of CNNs. To address this issue, we propose a higher-order channel attention (HOCA) module to enhance the representation ability of CNNs. In our HOCA module, to capture different types of semantic information, we first compute k-th-order feature statistics, followed by channel attention to learn feature interdependencies. Considering the diversity of input contents, we design a gating mechanism to adaptively select a specific k-th-order channel attention. Moreover, HOCA serves as a plug-and-play module and can be easily integrated into existing state-of-the-art CNN-based SR methods. Extensive experiments on public benchmarks show that our HOCA module effectively improves the performance of various CNN-based SR methods.
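To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the idea: per-channel k-th-order statistics feed order-specific channel-attention branches, and an input-adaptive gate fuses them. The class and parameter names (HOCA, max_order, reduction), the use of central moments as the k-th-order statistics, the squeeze-and-excitation-style branches, and the softmax gate are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over a (B, C) channel descriptor."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, stats):            # stats: (B, C)
        return self.fc(stats)            # per-channel attention weights: (B, C)


class HOCA(nn.Module):
    """Sketch of higher-order channel attention (assumed design, not the paper's code).

    For each order k = 1..max_order, compute a per-channel k-th-order statistic
    of the feature map, apply a dedicated channel-attention branch, then fuse
    the branches with a gate conditioned on the input content.
    """

    def __init__(self, channels, max_order=3, reduction=16):
        super().__init__()
        self.branches = nn.ModuleList(
            [ChannelAttention(channels, reduction) for _ in range(max_order)]
        )
        self.gate = nn.Linear(channels, max_order)  # assumed gate: pooled mean -> softmax

    def forward(self, x):                # x: (B, C, H, W)
        b, c, _, _ = x.shape
        flat = x.flatten(2)              # (B, C, H*W)
        mean = flat.mean(dim=2)          # first-order statistic (B, C)

        weights = []
        for k, branch in enumerate(self.branches, start=1):
            if k == 1:
                stat = mean
            else:
                # central k-th moment per channel as the higher-order statistic
                stat = ((flat - mean.unsqueeze(2)) ** k).mean(dim=2)
            weights.append(branch(stat))

        # input-adaptive selection over the k orders
        gate = F.softmax(self.gate(mean), dim=1)      # (B, max_order)
        w = torch.stack(weights, dim=1)               # (B, max_order, C)
        attn = (gate.unsqueeze(2) * w).sum(dim=1)     # fused weights (B, C)
        return x * attn.view(b, c, 1, 1)              # rescale feature channels
```

Consistent with the plug-and-play claim, such a module could be dropped in where RCAN places its channel attention, e.g., after the last convolution inside each residual block; the exact insertion point in the paper's networks is not specified here.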
Chairs: C.-C. Jay Kuo