ImrNet: An Iterative Motion Compensation and Residual Reconstruction Network for Video Compressed Sensing
Xin Yang, Chunling Yang
Traditional video compressed sensing (VCS) algorithms have elegant theoretical interpretability. However, the deterministic sparse transforms used in these algorithms usually cannot satisfy the sparsity requirement, which results in poor reconstruction quality, and the optimization process is slow. Deep learning can learn data-driven transforms while achieving fast reconstruction. This paper proposes an iterative motion compensation and residual reconstruction network for VCS, called ImrNet. ImrNet follows the iterative optimization scheme of the MC-BCS-SPL framework, and each module in ImrNet is trained independently. In addition, we design a motion compensation network (MCNet) that achieves adaptive fusion among frames, using an image semantic segmentation method to obtain probability maps that serve as the fusion weights of each frame. The proposed MCNet generates the fusion-compensated frame in each iteration of ImrNet. Experimental results show that ImrNet achieves good reconstruction with only two iterations, and its reconstruction quality surpasses that of state-of-the-art VCS methods.
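The adaptive fusion described above can be pictured with a minimal PyTorch sketch. This is an illustration under our own assumptions, not the authors' released architecture: the names FusionWeightNet and fuse_frames, the small convolutional branch, and the grayscale block inputs are all hypothetical. The idea it shows is the one stated in the abstract: a segmentation-style branch predicts a per-pixel probability map over the candidate frames, and those probabilities are used as fusion weights to form the fusion-compensated frame.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionWeightNet(nn.Module):
    """Hypothetical weight-prediction branch: a small convolutional network
    that maps stacked candidate frames to per-pixel, per-frame probability
    maps (softmax over the frame dimension), in the spirit of a
    semantic-segmentation head."""

    def __init__(self, num_frames: int, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_frames, 3, padding=1),
        )

    def forward(self, candidates: torch.Tensor) -> torch.Tensor:
        # candidates: (B, num_frames, H, W) grayscale candidate frames
        logits = self.body(candidates)
        return F.softmax(logits, dim=1)  # probabilities sum to 1 over frames


def fuse_frames(candidates: torch.Tensor, weight_net: FusionWeightNet) -> torch.Tensor:
    """Weighted sum of candidate frames using the learned probability maps,
    producing a single fusion-compensated frame that could feed the
    residual-reconstruction stage of an iterative pipeline."""
    weights = weight_net(candidates)                    # (B, F, H, W)
    fused = (weights * candidates).sum(dim=1, keepdim=True)
    return fused                                        # (B, 1, H, W)


if __name__ == "__main__":
    # Toy usage: fuse three candidate frames of a 96x96 block.
    net = FusionWeightNet(num_frames=3)
    frames = torch.rand(1, 3, 96, 96)
    print(fuse_frames(frames, net).shape)  # torch.Size([1, 1, 96, 96])
```

Because the softmax weights are normalized per pixel, the fused output stays within the dynamic range of the inputs; in the iterative scheme described in the abstract, such a fused frame would serve as the motion-compensated prediction whose residual is then reconstructed.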
Chairs:
Yuvraj Parkale