Coupled Adversarial Learning for Single Image Super-Resolution
Chih-Chung Hsu, Kuan-Yu Huang
SPS
Length: 13:45
Generative adversarial networks (GANs) have been widely used in image restoration tasks such as image denoising, enhancement, and super-resolution. The objective function of a GAN-based image super-resolution model usually combines a reconstruction error, a semantic feature distance, and a GAN loss. In general, the semantic feature distance measures the feature similarity between the super-resolved and ground-truth images, ensuring that they have similar feature representations. However, the features are usually extracted by a pre-trained model whose representation is not designed to distinguish the features of low-resolution images from those of high-resolution images. In this study, a coupled adversarial net (CAN) based on a Siamese network structure is proposed to improve the effectiveness of the feature extraction. The proposed CAN provides the GAN loss and the semantic feature distance simultaneously, reducing the training complexity as well as improving the performance. Extensive experiments demonstrate that the proposed CAN is effective and efficient compared to state-of-the-art image super-resolution schemes.
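As a rough illustration of the composite objective the abstract describes (reconstruction error + semantic feature distance + GAN loss), the sketch below combines the three terms for a single image pair. This is a minimal assumption-laden toy, not the paper's implementation: `phi` is a placeholder for the shared (Siamese) feature extractor, and the loss weights are illustrative.

```python
import numpy as np

def phi(img):
    """Toy 'semantic' feature: per-channel means.
    Stands in for the shared Siamese feature-extraction branch
    that embeds both the super-resolved and ground-truth images."""
    return img.mean(axis=(0, 1))

def composite_sr_loss(sr, hr, disc_score_sr,
                      w_rec=1.0, w_feat=0.1, w_adv=1e-3):
    """Composite SR objective (illustrative weights, not the paper's)."""
    rec = np.mean((sr - hr) ** 2)             # pixel-wise reconstruction error
    feat = np.mean((phi(sr) - phi(hr)) ** 2)  # semantic feature distance
    adv = -np.log(disc_score_sr + 1e-12)      # generator-side GAN loss term
    return w_rec * rec + w_feat * feat + w_adv * adv

# Example: a noisy "super-resolved" image vs. its ground truth.
rng = np.random.default_rng(0)
hr = rng.random((8, 8, 3))
sr = hr + 0.05 * rng.standard_normal(hr.shape)
loss = composite_sr_loss(sr, hr, disc_score_sr=0.7)
```

A perfect reconstruction that fully fools the discriminator (`sr == hr`, `disc_score_sr == 1.0`) drives all three terms to (approximately) zero, which is the behavior the combined objective is meant to encourage.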