
SELF-SUPERVISED LEARNING OF DEPTH AND POSE USING CYCLE GENERATIVE ADVERSARIAL NETWORK

Yunhe Tong, Anjie Wang, Songchao Tan, Shanshe Wang, Siwei Ma, Wen Gao

26 Oct 2020

In recent years, supervised depth estimation models have typically required large amounts of ground-truth depth data to achieve satisfactory performance, yet such ground truth is usually costly and impractical to acquire. In this paper, we propose a self-supervised joint deep learning pipeline for depth and pose estimation from monocular video sequences, which uses a cycle generative adversarial network structure to extend the existing reconstruction loss based on photometric consistency. The generator learns to synthesize the adjacent image in order to predict the depth map and the relative pose to the target view, while the discriminator learns the distribution of monocular images so as to classify the realism of the synthesized image. In addition, a reconstruction loss based on pose consistency is used to assist in training the generator. Extensive experimental results on the KITTI dataset show the superior performance of the proposed method.
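As a rough illustration only, the sketch below shows one plausible way the three loss terms mentioned in the abstract (photometric reconstruction, adversarial realism, and pose consistency) could be combined for the generator. This is not the authors' code; all function and parameter names, the loss weights, and the use of 4x4 pose matrices are assumptions made for the example.

```python
# Minimal sketch of a generator objective combining photometric reconstruction,
# an adversarial term, and a pose-consistency term. Hypothetical, not the
# authors' implementation.
import torch
import torch.nn.functional as F

def generator_loss(synthesized, target, disc_logits_on_fake,
                   pose_forward, pose_backward,
                   w_photo=1.0, w_adv=0.1, w_pose=0.1):
    # Photometric consistency: the view synthesized from the adjacent frame
    # (via predicted depth and relative pose) should match the target frame.
    photo = F.l1_loss(synthesized, target)

    # Adversarial term: the generator is rewarded when the discriminator
    # scores the synthesized view as real (non-saturating GAN loss).
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))

    # Pose consistency: the pose predicted for (t -> t+1), composed with the
    # pose predicted for (t+1 -> t), should be close to the identity.
    identity = torch.eye(4, device=pose_forward.device).expand_as(pose_forward)
    pose_cyc = F.l1_loss(pose_forward @ pose_backward, identity)

    return w_photo * photo + w_adv * adv + w_pose * pose_cyc

# Example usage with dummy tensors (batch of 2, 128x416 images, 4x4 poses).
if __name__ == "__main__":
    syn = torch.rand(2, 3, 128, 416)
    tgt = torch.rand(2, 3, 128, 416)
    d_fake = torch.randn(2, 1)
    T_fw = torch.eye(4).expand(2, 4, 4)
    T_bw = torch.eye(4).expand(2, 4, 4)
    print(generator_loss(syn, tgt, d_fake, T_fw, T_bw))
```

The discriminator would be trained with the usual real/fake classification objective on target frames versus synthesized frames; only the generator-side combination is sketched here.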
