Deep Learning based Landmark Matching for Aerial Geolocalization
Koundinya Nouduri, Filiz Bunyak, Shizeng Yao, Hadi AliAkbarpour, Sanjeev Agarwal, Raghuveer Rao, Kannappan Palaniappan
SPS
Visual odometry has gained increasing attention due to the proliferation of unmanned aerial vehicles, self-driving cars, and other autonomous robotic systems. Landmark detection and matching are critical for visual localization. While current methods rely on point-based image features or descriptor mappings, we consider landmarks at the object level. In this paper, we propose LMNet, a deep learning-based landmark matching pipeline for city-scale aerial images of urban scenes. LMNet consists of a Siamese network, extended with a multi-patch matching scheme, to handle off-center landmarks, varying landmark scales, and occlusions by surrounding structures. While a number of landmark recognition benchmark datasets exist for ground-based and nadir aerial or satellite imagery, there is a lack of datasets and results for oblique aerial imagery. We use a unique unsupervised multi-view landmark image generation pipeline to train and test the proposed matching pipeline on over 0.5 million real landmark patches. Experiments on aerial landmark matching across four cities show promising results.
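To illustrate the core idea of Siamese matching extended with a multi-patch scheme, the toy sketch below uses a single shared random projection in place of the paper's (much deeper) Siamese CNN branch, and takes the maximum cosine similarity over several candidate crops. All names and the embedding itself are illustrative assumptions, not the authors' LMNet implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared embedding: one random projection stands in for the
# shared-weight Siamese branch (assumption, not the paper's architecture).
W = rng.standard_normal((64, 16))

def embed(patch):
    """Project a flattened 8x8 patch into a unit-norm 16-d descriptor."""
    v = patch.reshape(-1) @ W
    return v / np.linalg.norm(v)

def match_score(query, candidate_crops):
    """Multi-patch matching: score the query against several crops of a
    candidate landmark and keep the best (max) cosine similarity, which
    gives tolerance to off-center landmarks and partial occlusion."""
    q = embed(query)
    return max(float(embed(c) @ q) for c in candidate_crops)

query = rng.standard_normal((8, 8))
# Slightly perturbed crops of the same landmark vs. unrelated patches.
same_landmark = [query + 0.05 * rng.standard_normal((8, 8)) for _ in range(3)]
other_landmark = [rng.standard_normal((8, 8)) for _ in range(3)]

s_same = match_score(query, same_landmark)
s_other = match_score(query, other_landmark)
print(s_same > s_other)
```

Taking the maximum over crops, rather than a single centered comparison, is one simple way to realize the multi-patch robustness the abstract describes.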