Video Frame Interpolation Via Residue Refinement
Haopeng Li, Yuan Yuan, Qi Wang
Video frame interpolation achieves temporal super-resolution by generating smooth transitions between frames. Although deep neural networks have achieved great success on this task, the synthesized images still suffer from poor visual quality and unsatisfactory artifacts. In this paper, we propose a novel network structure that leverages residue refinement and adaptive weight maps to synthesize in-between frames. Residue refinement is applied to both optical flow estimation and image generation for higher accuracy and better visual appearance, while the adaptive weight map combines the forward- and backward-warped frames to reduce artifacts. Moreover, all sub-modules in our method are implemented as U-Nets with reduced depth, so efficiency is guaranteed. Experiments on public datasets demonstrate the effectiveness and superiority of our method over state-of-the-art approaches.
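To make the fusion idea concrete, below is a minimal PyTorch sketch of adaptive-weight blending as described in the abstract: the two warped candidate frames are fused with a per-pixel weight map predicted by a small network, and a residue branch refines the blended result. The module names, channel sizes, and the `backward_warp` helper are illustrative assumptions, not the authors' exact architecture (which uses shallow U-Nets for each sub-module).

```python
# Sketch of adaptive-weight fusion with residue refinement (assumed layout).
import torch
import torch.nn as nn
import torch.nn.functional as F


def backward_warp(img, flow):
    """Warp `img` (N,C,H,W) with a dense flow field (N,2,H,W) via bilinear sampling."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(img, grid, align_corners=True)


class AdaptiveBlend(nn.Module):
    """Predict a per-pixel weight map W and fuse the two warped frames:
    I_t = W * warp(I_0) + (1 - W) * warp(I_1), then add a learned residue.
    Plain conv stacks stand in for the paper's U-Net sub-modules."""

    def __init__(self, in_ch=6, mid_ch=32):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.residue_net = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 3, 3, padding=1),
        )

    def forward(self, warped0, warped1):
        x = torch.cat((warped0, warped1), dim=1)
        w = self.weight_net(x)                       # adaptive weight map in [0, 1]
        blended = w * warped0 + (1.0 - w) * warped1  # weighted fusion of candidates
        return blended + self.residue_net(x)         # residue refinement of the blend


if __name__ == "__main__":
    i0, i1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    f_t0, f_t1 = torch.zeros(1, 2, 64, 64), torch.zeros(1, 2, 64, 64)
    model = AdaptiveBlend()
    i_t = model(backward_warp(i0, f_t0), backward_warp(i1, f_t1))
    print(i_t.shape)  # torch.Size([1, 3, 64, 64])
```

The sigmoid-bounded weight map lets the network favor whichever warped frame is more reliable at each pixel (e.g., near occlusions), while the additive residue corrects details that warping alone cannot recover.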