23 Sep 2020

In recent years, a variety of visual SLAM (Simultaneous Localization and Mapping) systems have been proposed. These systems allow camera-equipped agents to create a map of the environment and determine their position within this map, even without an available GPS signal. Visual SLAM algorithms differ mainly in how the image information is processed and whether the resulting map is represented as a dense point cloud or with sparse feature points. What most systems have in common, however, is that considerable computational effort is still required to maintain an accurate, consistent, and up-to-date pose and map. This is a challenge for small mobile agents with limited power and computing resources.

In this paper, we investigate how the processing steps of a state-of-the-art feature-based visual SLAM system can be distributed between a mobile agent and an edge-cloud server. Depending on the agent's specification, it can run the complete system locally, offload only the tracking and optimization part, or run nearly all processing steps on the server. For this purpose, we examine the individual processing steps and their resulting data formats and present methods for transmitting them efficiently to the server. Our experimental evaluation shows that the CPU load is reduced for all task distributions that offload part of the pipeline to the server. For agents with low computing power, the processing time for pose estimation can even be reduced. In addition, the higher computing power of the server allows the frame rate and the accuracy of pose estimation to be increased.
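
The practical question in such a split is what data crosses the network at each cut point. As a rough illustration (not the paper's implementation), the following Python sketch shows one way a feature-based agent could pack a frame's keypoints and binary descriptors into a compact payload for an edge server; the ORB front end, the payload layout, and the input file name are assumptions made for the example.

    # A minimal sketch of per-frame serialization for feature-level
    # offloading. Assumes an ORB front end, as used by systems like
    # ORB-SLAM2; the payload layout here is invented for illustration.
    import struct

    import cv2
    import numpy as np

    ORB = cv2.ORB_create(nfeatures=1000)

    def pack_frame(frame_id: int, gray: np.ndarray) -> bytes:
        """Extract ORB keypoints/descriptors and pack them into one payload."""
        keypoints, descriptors = ORB.detectAndCompute(gray, None)
        if descriptors is None:  # frame with no detectable features
            keypoints, descriptors = [], np.empty((0, 32), dtype=np.uint8)
        # Header: frame id, keypoint count, descriptor width in bytes.
        header = struct.pack("<IHH", frame_id, len(keypoints),
                             descriptors.shape[1])
        # Per keypoint: x, y, orientation angle, pyramid octave (14 bytes).
        kp_block = b"".join(
            struct.pack("<fffh", kp.pt[0], kp.pt[1], kp.angle, kp.octave)
            for kp in keypoints
        )
        return header + kp_block + descriptors.tobytes()

    if __name__ == "__main__":
        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
        payload = pack_frame(0, img)
        print(f"payload: {len(payload)} B, raw image: {img.nbytes} B")

Since 1000 ORB descriptors occupy about 32 kB plus roughly 14 kB of keypoint records, such a payload is typically several times smaller than even a grayscale VGA image (about 307 kB), which suggests why feature-level offloading can save bandwidth as well as CPU time.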
