BOOSTING IMAGE-BASED LOCALIZATION VIA RANDOMLY GEOMETRIC DATA AUGMENTATION
Yiming Wan, Wei Gao, Sheng Han, Yihong Wu
Visual localization is a fundamental problem in computer vision and robotics. Recently, deep learning has proven effective for robust monocular localization. Most deep learning-based methods use a convolutional neural network (CNN) to regress the global 6 degree-of-freedom (DoF) pose. However, these methods suffer from pose sparsity, which leads to over-fitting during training and poor localization performance on unseen data. In this paper, we try to alleviate this issue by applying randomly geometric augmentation (RGA) during training. Specifically, we first estimate a depth map for each initial training image using a depth estimation network. Combining the estimated depth, the RGB image, and its corresponding pose, we randomly synthesize new images from different views. The synthesized and initial images are then used together to train the pose regression network. Experimental results show that our geometric augmentation strategy significantly improves localization accuracy.
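The view-synthesis step described above can be illustrated with a short sketch: each pixel of a training image is back-projected to 3-D using its estimated depth and the camera intrinsics, transformed by a small random rigid motion, and re-projected to form a new image whose pose label is the perturbed pose. The NumPy code below is a minimal sketch under these assumptions; the function name synthesize_view, the intrinsics K, and the perturbation ranges are illustrative rather than taken from the paper, which may additionally handle occlusions and hole filling.

    import numpy as np

    def synthesize_view(rgb, depth, K, max_angle_deg=10.0, max_trans=0.2, rng=None):
        """Warp an RGB image to a randomly perturbed camera pose using its depth map."""
        rng = np.random.default_rng() if rng is None else rng
        h, w = depth.shape

        # Back-project every pixel to a 3-D point in the source camera frame.
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T      # 3 x N
        pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                    # 3 x N

        # Sample a small random rotation (axis-angle) and translation.
        angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
        axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
        Kx = np.array([[0, -axis[2], axis[1]],
                       [axis[2], 0, -axis[0]],
                       [-axis[1], axis[0], 0]])
        R = np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx     # Rodrigues' formula
        t = rng.uniform(-max_trans, max_trans, size=(3, 1))

        # Transform points into the new view and project back to pixel coordinates.
        pts_new = R @ pts + t
        proj = K @ pts_new
        uv = (proj[:2] / np.clip(proj[2:], 1e-6, None)).round().astype(int)

        # Forward-warp colours; pixels falling outside the frame or behind the
        # camera are dropped, leaving holes a real pipeline would inpaint.
        out = np.zeros_like(rgb)
        valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (pts_new[2] > 0)
        out[uv[1, valid], uv[0, valid]] = rgb.reshape(-1, rgb.shape[-1])[valid]

        # The perturbation (R, t) is composed with the original camera pose to
        # obtain the pose label for the synthesized image.
        return out, (R, t)

In this sketch the perturbed pose serves as the ground-truth label for the synthesized image, so each original training sample yields additional view-pose pairs for the pose regression network.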