26 Oct 2020

Convolutional neural networks (CNNs) have been widely used in image super-resolution (SR). Most existing CNN-based methods pursue better performance by designing deeper or wider networks, at the cost of heavy computation that hinders deployment on mobile devices with limited resources. To alleviate this problem, we propose a novel and efficient SR model, named Feature Affinity-based Knowledge Distillation (FAKD), which transfers the structural knowledge of a heavy teacher model to a lightweight student model. To transfer structural knowledge effectively, FAKD distills second-order statistical information from feature maps and trains a lightweight student network with low computational and memory cost. Experimental results demonstrate the efficacy of our method and its advantage over other knowledge-distillation-based methods in terms of both quantitative metrics and visual quality.
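To make the idea of distilling second-order feature statistics concrete, here is a minimal PyTorch sketch of a feature-affinity distillation loss. It assumes the affinity matrix is the Gram matrix of channel-normalized, spatially flattened features; the helper names (`affinity_matrix`, `fakd_loss`), the choice of L1 as the matching loss, and the set of matched layers are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def affinity_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Second-order (pairwise spatial) statistics of a feature map.

    feat: (B, C, H, W) feature map from one network layer.
    Returns a (B, HW, HW) matrix whose entries are cosine
    similarities between spatial positions.
    """
    b, c, h, w = feat.shape
    feat = feat.view(b, c, h * w)           # flatten spatial dims: (B, C, HW)
    feat = F.normalize(feat, p=2, dim=1)    # unit-norm channel vector per position
    return torch.bmm(feat.transpose(1, 2), feat)  # (B, HW, HW) Gram matrix

def fakd_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Distillation term: match student and teacher affinity matrices.

    Assumes the two feature maps share the same spatial resolution,
    so their affinity matrices are directly comparable even when the
    student has fewer channels than the teacher.
    """
    return F.l1_loss(affinity_matrix(student_feat),
                     affinity_matrix(teacher_feat))
```

In training, a term like this would typically be summed over a few matched teacher/student layers and added, with a weighting factor, to the usual pixel-wise reconstruction loss of the student SR network.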
