Image Super-Resolution Using Residual Global Context Network
Kuangye Liu, Zhen Han, Junkui Chen, Chunlei Liu, Jun Chen, Zhongyuan Wang
Recent studies have shown that convolutional neural networks (CNNs) can effectively improve the performance of single image super-resolution (SR). However, previous methods rarely consider long-range dependencies between pixels and channel-wise interdependencies at the same time. They ignore the fact that natural images exhibit strong internal data repetition, which requires the network to capture long-range dependencies between pixels, and that modeling the interdependencies between channels can better exploit the network's input information. In addition, although past studies have shown that deeper convolutional neural networks benefit image super-resolution, greater depth also brings higher memory consumption and computational complexity. To address these problems, we introduce the Global Context block (GCB) and design a comparatively shallow network called the Residual Global Context Network (RGCN). It achieves a better trade-off between the number of parameters and the quality of image reconstruction. Extensive experiments demonstrate that the proposed method is superior to state-of-the-art methods.
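To make the role of the Global Context block concrete, the following is a minimal PyTorch sketch of a generic GC block of the kind the abstract refers to: global attention pooling over all spatial positions (long-range dependencies between pixels) followed by a channel-wise bottleneck transform (channel interdependencies), fused back by addition. The class name, reduction ratio, and layer choices here are assumptions for illustration, not the authors' exact RGCN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalContextBlock(nn.Module):
    """Sketch of a Global Context (GC) block: global attention pooling,
    a bottleneck channel transform, and additive fusion (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Context modeling: a 1x1 conv produces per-position attention logits.
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        # Transform: channel bottleneck (1x1 convs with LayerNorm and ReLU).
        self.transform = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Global attention pooling over all spatial positions (long-range context).
        weights = F.softmax(self.attn(x).view(b, 1, h * w), dim=-1)       # (B, 1, HW)
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))  # (B, C, 1)
        context = context.view(b, c, 1, 1)
        # Channel-wise transform models interdependencies between channels.
        context = self.transform(context)
        # Fusion: broadcast-add the aggregated global context to every position.
        return x + context


if __name__ == "__main__":
    # Toy usage: insert the block after a convolutional feature extractor.
    feats = torch.randn(2, 64, 48, 48)
    print(GlobalContextBlock(64)(feats).shape)  # torch.Size([2, 64, 48, 48])
```

Because the context vector is computed once per image and shared across all positions, such a block adds far fewer parameters and FLOPs than stacking extra convolutional layers, which is consistent with the shallower-network trade-off the abstract describes.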