CU-Net+: Deep Fully Interpretable Network for Multi-Modal Image Restoration
Jingyi Xu, Xin Deng, Mai Xu, Pier Luigi Dragotti
Network interpretability is critical in computer vision tasks, especially those involving multiple modalities. For multi-modal image restoration, a recent method, CU-Net, provides an interpretable network based on a multi-modal convolutional sparse coding model. However, its network architecture cannot fully interpret the model. In this paper, we propose to unfold the model into a network using a recurrent scheme, leading to a fully interpretable network, namely CU-Net+. In addition, we relax the constraint on the numbers of common and unique features in CU-Net, making the model more consistent with real conditions. The effectiveness of the proposed CU-Net+ is evaluated on RGB guided depth image super-resolution and flash guided non-flash image denoising tasks. The numerical results show that CU-Net+ outperforms other interpretable and non-interpretable methods, with a 0.16 reduction in RMSE and a 0.66 dB gain in PSNR over CU-Net on the two tasks, respectively.
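To make the recurrent unfolding idea concrete, below is a minimal sketch (not the authors' implementation) of one stage of a multi-modal convolutional sparse coding update in PyTorch. All module names, feature sizes, and threshold initializations are illustrative assumptions. It shows two things the abstract describes: each modality is modeled by common plus unique convolutional sparse codes, and the same stage weights are reused at every iteration, which is what makes the unfolded network recurrent and one-to-one with the optimization model. Note that n_common and n_unique need not be equal, mirroring the relaxed constraint on the feature numbers.

```python
# Hypothetical sketch of a recurrent multi-modal convolutional sparse
# coding stage, in the spirit of CU-Net+ (names and sizes are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F


def soft_threshold(x, theta):
    """Proximal operator of the l1 norm (elementwise soft-thresholding)."""
    return torch.sign(x) * F.relu(x.abs() - theta)


class RecurrentMCSCStage(nn.Module):
    """One ISTA-style update of the common and unique sparse codes.

    The same module (hence the same weights) is applied at every
    iteration, so the unfolded network is recurrent and each layer
    corresponds exactly to one step of the underlying model.
    """

    def __init__(self, channels=1, n_common=32, n_unique=24):
        super().__init__()
        # Per-modality convolutional dictionaries (decoders) and their
        # adjoints (encoders) for the shared common code ...
        self.dec_c1 = nn.Conv2d(n_common, channels, 3, padding=1, bias=False)
        self.enc_c1 = nn.Conv2d(channels, n_common, 3, padding=1, bias=False)
        self.dec_c2 = nn.Conv2d(n_common, channels, 3, padding=1, bias=False)
        self.enc_c2 = nn.Conv2d(channels, n_common, 3, padding=1, bias=False)
        # ... and for each modality's unique code.
        self.dec_u1 = nn.Conv2d(n_unique, channels, 3, padding=1, bias=False)
        self.enc_u1 = nn.Conv2d(channels, n_unique, 3, padding=1, bias=False)
        self.dec_u2 = nn.Conv2d(n_unique, channels, 3, padding=1, bias=False)
        self.enc_u2 = nn.Conv2d(channels, n_unique, 3, padding=1, bias=False)
        # Learnable soft-thresholds (sparsity levels), one per feature map.
        self.theta_c = nn.Parameter(torch.full((1, n_common, 1, 1), 0.01))
        self.theta_u1 = nn.Parameter(torch.full((1, n_unique, 1, 1), 0.01))
        self.theta_u2 = nn.Parameter(torch.full((1, n_unique, 1, 1), 0.01))

    def forward(self, x1, x2, z_c, z_u1, z_u2):
        # Residuals between each observed modality and its reconstruction
        # from the current common and unique codes.
        r1 = x1 - self.dec_c1(z_c) - self.dec_u1(z_u1)
        r2 = x2 - self.dec_c2(z_c) - self.dec_u2(z_u2)
        # Gradient steps in code space followed by soft-thresholding; the
        # common code aggregates information from both modalities.
        z_c = soft_threshold(z_c + self.enc_c1(r1) + self.enc_c2(r2),
                             self.theta_c)
        z_u1 = soft_threshold(z_u1 + self.enc_u1(r1), self.theta_u1)
        z_u2 = soft_threshold(z_u2 + self.enc_u2(r2), self.theta_u2)
        return z_c, z_u1, z_u2


# Usage: iterate the same stage K times (recurrent unfolding).
stage = RecurrentMCSCStage()
x1 = torch.randn(1, 1, 64, 64)      # e.g. low-quality depth map
x2 = torch.randn(1, 1, 64, 64)      # e.g. RGB guidance (single channel here)
z_c = torch.zeros(1, 32, 64, 64)    # common codes
z_u1 = torch.zeros(1, 24, 64, 64)   # unique codes, modality 1
z_u2 = torch.zeros(1, 24, 64, 64)   # unique codes, modality 2
for _ in range(4):                  # K unrolled iterations, shared weights
    z_c, z_u1, z_u2 = stage(x1, x2, z_c, z_u1, z_u2)
```

Because the weights are shared across iterations rather than duplicated per layer, every parameter in the sketch maps to a quantity in the sparse coding model (a dictionary or a threshold), which is the sense in which such an unrolled recurrent network is fully interpretable.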