FAKD: FEATURE-AFFINITY BASED KNOWLEDGE DISTILLATION FOR EFFICIENT IMAGE SUPER-RESOLUTION
Zibin He, Tao Dai, Jian Lu, Yong Jiang, Shu-Tao Xia
Convolutional neural networks (CNNs) have been widely used in image super-resolution (SR). Most existing CNN-based methods pursue better performance by designing deeper/wider networks, at the expense of heavy computational cost, which hinders the deployment of such models on mobile devices with limited resources. To relieve this problem, we propose a novel and efficient SR model, named Feature Affinity-based Knowledge Distillation (FAKD), which transfers the structural knowledge of a heavy teacher model to a lightweight student model. To transfer structural knowledge effectively, FAKD distills the second-order statistical information (i.e., feature affinity) from feature maps and trains a lightweight student network with low computational and memory cost. Experimental results demonstrate the efficacy of our method and its superiority over other knowledge-distillation-based methods in terms of both quantitative and visual metrics.
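To make the idea of distilling second-order feature statistics concrete, below is a minimal PyTorch sketch of a feature-affinity distillation loss. It assumes teacher and student feature maps share the same spatial size; the function names (`affinity_matrix`, `fakd_loss`) and the choice of L1 matching and per-position L2 normalization are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def affinity_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Second-order (affinity) statistics of a feature map.

    feat: (B, C, H, W) tensor. Each spatial position is treated as a
    C-dimensional descriptor; the affinity matrix holds normalized inner
    products between all pairs of positions.
    """
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)               # (B, C, HW)
    f = F.normalize(f, p=2, dim=1)           # unit-norm each spatial descriptor
    return torch.bmm(f.transpose(1, 2), f)   # (B, HW, HW) affinity matrix

def fakd_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Distillation loss: match student and teacher affinity matrices.

    Assumes both feature maps have the same spatial resolution (H, W);
    channel counts may differ, since the affinity matrix is channel-free.
    """
    return F.l1_loss(affinity_matrix(student_feat),
                     affinity_matrix(teacher_feat))
```

Because the affinity matrix is (HW x HW) regardless of channel count, the teacher and student need not have the same width, which is what lets a lightweight student mimic a heavy teacher's structural (pairwise spatial) relations rather than its raw activations.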