Dispense Mode for Inference to Accelerate BranchyNet
Zhiwei Liang, Yuezhi Zhou
Visual indoor exploration requires an agent to explore a room within a limited time. Current planning-based solutions have a time-consuming inference stage and require many hand-crafted parameters for different scenes, whereas reinforcement learning (RL) schemes avoid these problems by automatically updating flexible policies and offering faster inference. Spurred by the advantages of RL, we introduce Spatial Attention Visual Exploration (SAVE), which builds on Active Neural SLAM (ANS). Specifically, we propose a novel RL-based global planner, the Spatial Global Policy (SGP), which exploits spatial information to promote efficient exploration through global goal guidance. SGP has two major components: a transformer-based spatial-attention module that encodes the spatial interrelations between the agent and different map regions to perform spatial reasoning, and a hierarchical spatial action selector that infers global goals for fast training. Map representations are aligned through our spatial adjustor. Experiments on the photo-realistic Habitat simulator demonstrate that SAVE outperforms current planning-based methods and RL variants, reducing processing steps by at least 10% and the repeat ratio by 15%, while running 2x to 4x faster than planning-based methods.
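To make the described architecture concrete, below is a minimal sketch of how a transformer-based spatial-attention module over map regions could be combined with a coarse/fine (hierarchical) global-goal selector. This is an illustrative assumption in PyTorch, not the authors' implementation; the class name, feature dimensions, and the two-level head structure are hypothetical.

```python
# Hypothetical sketch (not the authors' code): the agent state attends to
# region features via multi-head attention, then a hierarchical head picks
# a coarse grid cell and a fine offset as the global goal.
import torch
import torch.nn as nn


class SpatialGlobalPolicy(nn.Module):
    def __init__(self, region_dim=64, agent_dim=64, n_heads=4, grid=8):
        super().__init__()
        # Spatial-attention module: encodes agent-region interrelations.
        self.attn = nn.MultiheadAttention(embed_dim=region_dim,
                                          num_heads=n_heads,
                                          batch_first=True)
        self.agent_proj = nn.Linear(agent_dim, region_dim)
        # Hierarchical action selector: coarse grid cell, then fine offset.
        self.coarse_head = nn.Linear(region_dim, grid * grid)
        self.fine_head = nn.Linear(region_dim, 4)  # e.g. 2x2 refinement

    def forward(self, agent_state, region_feats):
        # agent_state: (B, agent_dim); region_feats: (B, R, region_dim)
        query = self.agent_proj(agent_state).unsqueeze(1)   # (B, 1, D)
        context, _ = self.attn(query, region_feats, region_feats)
        context = context.squeeze(1)                        # (B, D)
        coarse_logits = self.coarse_head(context)           # which cell
        fine_logits = self.fine_head(context)               # where in cell
        return coarse_logits, fine_logits


# Usage with random tensors standing in for map-region encodings.
policy = SpatialGlobalPolicy()
agent = torch.randn(2, 64)
regions = torch.randn(2, 16, 64)
coarse, fine = policy(agent, regions)
print(coarse.shape, fine.shape)  # torch.Size([2, 64]) torch.Size([2, 4])
```

Factoring the goal choice into a coarse cell plus a fine offset keeps the action space small at each level, which is one plausible way the hierarchical selector could speed up training relative to predicting a goal over the full map grid directly.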