POINT CLOUD SEGMENTATION USING RGB DRONE IMAGERY
Marc WuDunn, James Dunn, Avideh Zakhor
In recent years, the ubiquity of drones equipped with RGB cameras has made aerial 3D model generation significantly more cost-effective than traditional aerial LiDAR-based methods. Most existing aerial 3D point cloud segmentation approaches use geometric methods and are tailored to 3D LiDAR data. In this paper, we propose a pipeline for semantic segmentation of 3D point clouds obtained via photogrammetry from aerial RGB camera images. Our basic approach is to apply deep learning segmentation methods directly to the same RGB images used to create the point cloud, and then back-project the pixel classes in the segmented images onto the 3D points. This is a particularly attractive solution, since deep learning methods for image segmentation are more mature and advanced than those for 3D point cloud segmentation. Furthermore, GPU engines for 2D image convolutions are likely to achieve higher processing speeds than could be attained on 3D point cloud data. We demonstrate our segmentation approach on two RGB drone image datasets captured in Alameda, California, and evaluate its performance against manually labelled ground truth data. Using F1 and Jaccard similarity coefficient scores, we show that our methodology outperforms existing commercially available packages such as Pix4D.
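
To make the back-projection step concrete, the following is a minimal sketch, assuming a pinhole camera model with per-image world-to-camera poses (R, t) and intrinsics K recovered by photogrammetry. The function names, the nearest-pixel label lookup, and the majority-vote fusion across overlapping views are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def backproject_labels(points, K, R, t, label_img):
    """Project 3D points into one segmented image and read off
    per-pixel class labels. Points behind the camera or outside
    the frame get label -1 (unknown).

    points    : (N, 3) world-space point cloud
    K         : (3, 3) camera intrinsic matrix
    R, t      : world-to-camera rotation (3, 3) and translation (3,)
    label_img : (H, W) integer class map from the 2D segmenter
    """
    cam = points @ R.T + t                  # world -> camera coordinates
    z = cam[:, 2]                           # depth along the optical axis
    uv = cam @ K.T                          # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]             # normalize to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)      # nearest-pixel lookup (assumption)
    v = np.round(uv[:, 1]).astype(int)

    h, w = label_img.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    labels = np.full(len(points), -1, dtype=int)
    labels[valid] = label_img[v[valid], u[valid]]
    return labels

def fuse_votes(per_image_labels, num_classes):
    """Majority vote over the labels each point receives from
    multiple overlapping images; -1 (unknown) entries are ignored.

    per_image_labels : (M, N) array, one row per image
    """
    votes = np.zeros((per_image_labels.shape[1], num_classes), dtype=int)
    for labels in per_image_labels:
        hit = labels >= 0
        votes[np.flatnonzero(hit), labels[hit]] += 1
    # Points never seen by any image remain unlabelled (-1).
    return np.where(votes.sum(axis=1) > 0, votes.argmax(axis=1), -1)
```

In practice a visibility test (e.g. depth buffering against a rendered depth map) would be needed before accepting a vote, since a point occluded in a given view would otherwise inherit the label of whatever surface hides it.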