  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:17
21 Sep 2021

Data for supervised learning of ego-motion and depth from video is scarce and expensive to produce. Consequently, recent work has focused on unsupervised learning methods and achieved remarkable results. Many unsupervised approaches rely on single-view predicted depth and so ignore motion information. Some unsupervised methods incorporate motion information indirectly by designing the depth prediction network as an RNN. However, none of the existing methods make direct use of multiple frames when predicting depth, even though such frames are readily available in videos. In this work, we show that it is possible to achieve superior pose prediction results by modeling motion more directly. Our method uses a novel learning-based formulation for depth propagation and refinement, which warps predicted depth maps forward from the current frame onto the next frame, where they serve as a prior for predicting the next frame's depth map. Code will be made available upon acceptance.
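The core geometric step the abstract describes, warping the current frame's predicted depth into the next frame using the predicted ego-motion, can be illustrated with a classical forward warp. The sketch below is an assumption for illustration only (the function name `forward_warp_depth` and the pinhole-camera setup with intrinsics `K` and relative pose `(R, t)` are hypothetical, not the authors' actual learned formulation): it back-projects each pixel to 3D, applies the camera motion, reprojects into the next view, and scatters depths with a simple z-buffer.

```python
import numpy as np

def forward_warp_depth(depth, K, R, t):
    """Forward-warp a depth map from the current frame into the next frame.

    depth : (h, w) depth map of the current frame
    K     : (3, 3) camera intrinsics
    R, t  : relative camera rotation (3, 3) and translation (3,)
    Returns an (h, w) depth map in the next frame's view; pixels that
    receive no projection stay 0 (holes a refinement network would fill).
    """
    h, w = depth.shape
    K_inv = np.linalg.inv(K)

    # Pixel grid in homogeneous coordinates, shape (3, h*w).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])

    # Back-project to 3D camera coordinates and apply the ego-motion.
    pts = (K_inv @ pix) * depth.ravel()
    pts = R @ pts + t.reshape(3, 1)

    # Reproject into the next view.
    proj = K @ pts
    z = proj[2]
    valid = z > 1e-6
    uu = np.round(proj[0, valid] / z[valid]).astype(int)
    vv = np.round(proj[1, valid] / z[valid]).astype(int)
    zz = z[valid]

    inside = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    uu, vv, zz = uu[inside], vv[inside], zz[inside]

    # Z-buffer scatter: sort far-to-near so the nearest surface wins.
    warped = np.zeros_like(depth)
    order = np.argsort(-zz)
    warped[vv[order], uu[order]] = zz[order]
    return warped
```

In the method described above, the warped map would act as a prior that the next frame's depth prediction refines, rather than as a final output; the holes and rounding artifacts of a plain scatter are exactly what a learned refinement step can correct.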
