07 Jul 2020

Depth estimation is a fundamental problem for light-field-based applications. Although recent learning-based methods have proven effective for light field depth estimation, they still have trouble handling occlusion regions. In this paper, by leveraging an explicitly learned occlusion map, we propose an occlusion-aware network that is capable of estimating accurate depth maps with sharp edges. Our main idea is to separate depth estimation on non-occlusion and occlusion regions, since the two exhibit different properties with respect to the light field structure, i.e., obeying and violating the angular photo-consistency constraint, respectively. To this end, our network comprises three modules: the occlusion region detection network (ORDNet), the coarse depth estimation network (CDENet), and the refined depth estimation network (RDENet). Specifically, ORDNet predicts the occlusion map as a mask, and under the guidance of the resulting occlusion map, CDENet and RDENet focus on depth estimation in non-occlusion and occlusion areas, respectively. Experimental results show that our method achieves better performance on the 4D light field benchmark, especially in occlusion regions, when compared with current state-of-the-art light field depth estimation algorithms.
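The abstract only names the three modules and the mask-guided split between non-occlusion and occlusion regions; the papers' internal architectures, the number of input views, and the exact fusion rule are not given here. The following is a minimal PyTorch-style sketch of that layout under assumed placeholders (small conv blocks for each sub-network, a 9x9 view stack, and a simple mask-weighted blend of the two depth branches), not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): an explicit occlusion map
# gates two depth branches, given sub-aperture views stacked along channels.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 conv + ReLU, a stand-in for the real (unspecified) sub-networks."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))


class OcclusionAwareDepthNet(nn.Module):
    """Three-module layout described in the abstract:
    ORDNet -> occlusion mask, CDENet -> depth for non-occluded pixels,
    RDENet -> depth for occluded pixels, blended under the mask's guidance."""

    def __init__(self, num_views=81, feat=32):  # 81 = assumed 9x9 views
        super().__init__()
        # ORDNet: per-pixel occlusion probability in [0, 1].
        self.ordnet = nn.Sequential(conv_block(num_views, feat),
                                    nn.Conv2d(feat, 1, 3, padding=1),
                                    nn.Sigmoid())
        # CDENet: coarse depth, trusted where photo-consistency holds.
        self.cdenet = nn.Sequential(conv_block(num_views, feat),
                                    nn.Conv2d(feat, 1, 3, padding=1))
        # RDENet: refined depth for occluded regions, conditioned on the
        # views, the coarse depth, and the occlusion mask (assumed inputs).
        self.rdenet = nn.Sequential(conv_block(num_views + 2, feat),
                                    nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, lf_views):
        occ = self.ordnet(lf_views)                      # (B, 1, H, W) mask
        coarse = self.cdenet(lf_views)                   # non-occlusion depth
        refined = self.rdenet(torch.cat([lf_views, coarse, occ], dim=1))
        # Mask-guided fusion: rely on RDENet where occlusion is likely.
        return occ * refined + (1.0 - occ) * coarse


# Usage with a dummy 9x9 light field (81 views) of 64x64 crops.
net = OcclusionAwareDepthNet(num_views=81)
depth = net(torch.randn(2, 81, 64, 64))
print(depth.shape)  # torch.Size([2, 1, 64, 64])
```

The mask-weighted blend in the last line of `forward` is one plausible reading of "under the guidance of the resulting occlusion map"; the paper may combine the two branches differently.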
