DEPTH ESTIMATION FROM SINGLE IMAGE AND SEMANTIC PRIOR

Praful Hambarde, Akshay Dudhane, Prashant Patil, Subrahmanyam Murala, Abhinav Dhall

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 09:26
27 Oct 2020

Multi-modality sensor fusion is an active research area in scene understanding. In this work, we explore fusing an RGB image with its semantic map for depth estimation. Depth sensors such as LiDAR, Kinect, and time-of-flight (ToF) cameras fail to predict the depth-map on highly illuminated or monotonously textured surfaces. In this paper, we propose a semantic-to-depth generative adversarial network (S2D-GAN) that estimates depth from an RGB image and its semantic map. In the first stage, the proposed S2D-GAN estimates a coarse-level depth-map using a semantic-to-coarse-depth generative adversarial network (S2CD-GAN); the second stage then estimates the fine-level depth-map using a cascaded multi-scale spatial pooling network. Experimental analysis on the NYU-Depth-V2 dataset shows that the proposed S2D-GAN outperforms existing methods for single-image depth estimation as well as methods using RGB with sparse depth samples. The proposed S2D-GAN also gives efficient results on real-world indoor and outdoor images.
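The abstract does not give implementation details of the refinement stage. As a rough illustration only, the core operation of a multi-scale spatial pooling step — pooling a coarse depth-map at several scales and upsampling the results back to full resolution to form a multi-scale feature stack — might look like the following pure-NumPy sketch. All function names, the choice of scales, and the nearest-neighbour upsampling are assumptions, not the authors' code.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool a 2-D map with a non-overlapping k x k window."""
    h, w = x.shape
    hk, wk = h // k, w // k
    return x[:hk * k, :wk * k].reshape(hk, k, wk, k).mean(axis=(1, 3))

def upsample_nearest(x, shape):
    """Nearest-neighbour upsample a 2-D map to the target (H, W) shape."""
    h, w = shape
    rows = np.arange(h) * x.shape[0] // h
    cols = np.arange(w) * x.shape[1] // w
    return x[rows][:, cols]

def multi_scale_spatial_pooling(coarse_depth, scales=(2, 4, 8)):
    """Stack the coarse depth-map with pooled-and-upsampled copies of
    itself at several spatial scales (the scales here are assumed)."""
    feats = [coarse_depth]
    for s in scales:
        pooled = avg_pool(coarse_depth, s)
        feats.append(upsample_nearest(pooled, coarse_depth.shape))
    return np.stack(feats, axis=0)  # shape: (1 + len(scales), H, W)
```

In the actual network, such a stack would feed learned convolutional layers that regress the fine-level depth-map; this sketch only shows the pooling-pyramid idea behind the refinement stage.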
