
CASCADED DETAIL-AWARE NETWORK FOR UNSUPERVISED MONOCULAR DEPTH ESTIMATION

Xinchen Ye, Mingliang Zhang, Xin Fan, Rui Xu, Juncheng Pu, Ruoke Yan

Length: 08:56
09 Jul 2020

Existing unsupervised learning methods typically reformulate depth estimation as an image reconstruction problem, training on stereo image pairs to circumvent the need for densely labeled ground-truth depth. Most are built on a simple encoder-decoder backbone, which has limited capacity to represent contextual information and suffers from the loss of depth details. In this paper, we propose a cascaded detail-aware network, consisting of a contextual network (CN) followed by consecutive spatial networks (SNs), that makes an unsupervised coarse-to-fine prediction. The CN provides a well-initialized depth estimate by introducing a multi-scale attention fusion module that enhances feature representation. The SNs are then progressively applied to the coarse depth map to produce refined depth outputs by exploiting the abundant spatial details of the input color image. Moreover, we design a robust loss function that penalizes photometric errors while accounting for occlusion, strengthening the recovery of spatial details for better depth estimation. Experimental results show that the proposed method achieves promising performance.
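The core idea of casting depth estimation as image reconstruction can be illustrated with a minimal NumPy sketch: warp the right view toward the left view using a predicted disparity map, then penalize the photometric (L1) error on non-occluded pixels. This is a generic sketch of the standard unsupervised stereo objective, not the paper's exact loss; the function names and the nearest-neighbour sampling are assumptions for illustration.

```python
import numpy as np

def warp_with_disparity(right, disp):
    """Reconstruct the left view: left(x) ~= right(x - disp(x)).
    Uses nearest-neighbour sampling for simplicity; real systems
    use differentiable bilinear sampling."""
    h, w = right.shape
    xs = np.arange(w)[None, :] - disp            # source x-coordinates in the right image
    xs_round = np.clip(np.round(xs).astype(int), 0, w - 1)
    valid = (xs >= 0) & (xs <= w - 1)            # mask out-of-view / occluded samples
    recon = np.take_along_axis(right, xs_round, axis=1)
    return recon, valid

def photometric_loss(left, right, disp):
    """Mean L1 photometric error over valid (non-occluded) pixels."""
    recon, valid = warp_with_disparity(right, disp)
    return np.abs(left - recon)[valid].mean()
```

With a ground-truth disparity, the reconstruction matches the left view exactly and the loss vanishes on valid pixels, which is what the network's depth (disparity) prediction is trained to achieve.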
