  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:15
21 Sep 2021

Network interpretability is critical in computer-vision tasks, especially those involving multiple modalities. For multi-modal image restoration, one recent method, CU-Net, provides an interpretable network based on a multimodal convolutional sparse coding model. However, its network architecture cannot fully interpret the model. In this paper, we propose to unfold the model into a network using a recurrent scheme, leading to a fully interpretable network, namely CU-Net+. In addition, we relax CU-Net's constraint on the numbers of common and unique features, making it more consistent with real conditions. The effectiveness of the proposed CU-Net+ is evaluated on RGB-guided depth image super-resolution and flash-guided non-flash image denoising. The numerical results show that CU-Net+ outperforms other interpretable and non-interpretable methods, with improvements over CU-Net of 0.16 in RMSE and 0.66 dB in PSNR on the two tasks, respectively.
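The "recurrent scheme" mentioned above refers to unfolding an iterative sparse-coding solver into a network whose layers share weights. As a rough illustration only (this is not the paper's CU-Net+ architecture, and all names below are hypothetical), a weight-shared unrolling of ISTA for a single-modal sparse coding problem can be sketched as:

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding: the proximal operator of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, D, theta=0.1, step=0.1, n_iter=10):
    """Sketch of a recurrent (weight-shared) unrolling of ISTA for
    min_z 0.5*||y - D z||^2 + theta*||z||_1.
    The same update is applied at every iteration, mirroring a recurrent
    unfolding in which all layers share parameters; in a learned variant,
    D, step, and theta would become trainable."""
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fidelity term, followed by shrinkage
        z = soft_threshold(z + step * D.T @ (y - D @ z), step * theta)
    return z

# Toy usage: recover a sparse code for a random dictionary and signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
y = rng.standard_normal(8)
z = unrolled_ista(y, D)
```

A multimodal extension along the lines of CU-Net would additionally split the codes into common features shared across modalities and unique features per modality; the relaxation proposed in the paper lets those two feature counts differ.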
