  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:13:40
03 Oct 2022

This paper improves the efficiency of deep networks for single-image deraining with a newly proposed knowledge distillation framework. Specifically, we propose a rain-prior-injected distillation scheme that transfers knowledge from a large-scale teacher network to a more compact student network. Previous works directly calculate the distillation loss between the features extracted from the student and teacher networks. In contrast, our distillation scheme adaptively removes noisy background patterns by calculating the distillation loss on the residual feature, which is inferred from the features extracted from the rainy and ground-truth images. This residual operation makes the student network focus on transferring only the knowledge of the rain streaks rather than the background, which yields more effective distillation. Furthermore, our method can reduce both the network size and the number of deraining recurrence stages, making it a plug-and-play module that can be integrated into diverse existing deraining methods. Experimental results demonstrate the efficiency of our method in building a compact deraining network and its superiority over existing distillation methods.
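The residual-feature idea above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name, the use of a plain MSE loss, and the assumption that the residual is a simple feature difference between the rainy and ground-truth inputs are all assumptions made here for clarity.

```python
import numpy as np

def residual_distillation_loss(f_rain_teacher, f_gt_teacher,
                               f_rain_student, f_gt_student):
    """Hypothetical sketch of a rain-prior injected distillation loss.

    Each argument is a feature map (as a numpy array) extracted from the
    rainy image or the ground-truth clean image by the teacher or student.
    Subtracting the ground-truth features cancels the shared background
    patterns, so the residual is dominated by rain-streak information.
    """
    residual_teacher = f_rain_teacher - f_gt_teacher  # teacher's rain residual
    residual_student = f_rain_student - f_gt_student  # student's rain residual
    # Distill on the residuals (MSE assumed here), not the raw features,
    # so the student is supervised only on rain-streak knowledge.
    return float(np.mean((residual_student - residual_teacher) ** 2))
```

Because the background features cancel in the subtraction, a student that reproduces the teacher's rain residual incurs zero loss even if its raw background features differ, which is the intended contrast with conventional feature-matching distillation.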
