A Novel Self-Supervised Cross-Modal Image Retrieval Method in Remote Sensing
Gencer Sumbul, Markus Müller, Begüm Demir
In this paper, we propose SYGNet to strengthen the scene parsing ability of autonomous driving systems under complicated road conditions. SYGNet consists of a feature extraction component and an SVD-YOLO GhostNet component; the latter combines Singular Value Decomposition (SVD), You Only Look Once (YOLO), and GhostNet. In the feature extraction component, we propose an algorithm based on VoxelNet to extract point cloud features and image features. In the SVD-YOLO GhostNet component, the image data is decomposed by SVD, yielding data with stronger spatial and environmental characteristics. YOLOv3 is used to obtain the feature map, which is then passed to GhostNet to realize real-time scene parsing. We use the KITTI dataset to perform our experiments, and the results show that SYGNet is more robust and further enhances the accuracy of real-time driving scene parsing. The model code, dataset, and experimental results are available at: https://github.com/WangHewei16/SYGNet-for-Real-time-Driving-Scene-Parsing.
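The abstract states that image data is decomposed by SVD before detection, but does not give implementation details. Below is a minimal, hypothetical sketch of one common way such a step could look: a truncated SVD that keeps the top-k singular values of a single-channel image to emphasize its dominant spatial structure. The function name svd_decompose_image, the rank k, and the input sizes are illustrative assumptions, not the authors' code.

    # Hypothetical sketch, not the SYGNet implementation: truncated SVD of a
    # single-channel image, keeping the top-k singular values.
    import numpy as np

    def svd_decompose_image(image: np.ndarray, k: int = 32) -> np.ndarray:
        """Reconstruct an H x W image from its top-k singular values."""
        u, s, vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
        # Zero out all but the k largest singular values before reconstructing.
        s_trunc = np.zeros_like(s)
        s_trunc[:k] = s[:k]
        return (u * s_trunc) @ vt

    if __name__ == "__main__":
        # Random stand-in for one channel of a KITTI camera frame
        # (the paper evaluates on KITTI; the exact resolution here is assumed).
        frame = np.random.rand(375, 1242)
        low_rank = svd_decompose_image(frame, k=32)
        print(low_rank.shape)  # (375, 1242)

In such a pipeline, the low-rank reconstruction would be fed to the detector (YOLOv3 in the paper) in place of, or alongside, the raw frame; whether SYGNet uses truncation or another SVD-based transform is not specified in the abstract.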