Search for efficient deep visual-inertial odometry through neural architecture search
Yu Chen (University of Michigan); Mingyu Yang (University of Michigan); Hun Seok Kim (University of Michigan)
SPS
Recent deep learning-based visual-inertial odometry (VIO) systems achieve impressive performance in various applications and challenging scenarios. However, the extensive complexity of existing deep neural network (DNN) models makes it difficult to deploy such VIO models directly on energy-constrained mobile platforms in real time. To address this issue, we adopt the neural architecture search (NAS) technique to search for the most efficient VIO network architecture. Targeting the lowest number of operations and inference latency, our searched models achieve up to 97.4% complexity reduction with no performance degradation. The searched efficient visual encoder allows our VIO model to run at 83.3 frames per second on a single laptop CPU core. Moreover, the model complexity can be reduced by 99.1% when combined with a dynamic modality selection technique. Our searched efficient VIO models are available at https://github.com/unchenyu/NASVIO.
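To give a sense of how a complexity-targeted search objective can work, the following is a minimal, self-contained sketch of a differentiable NAS-style objective that trades off a task-loss proxy against expected operation count under softmax architecture weights. All names and numbers here are illustrative assumptions for exposition, not the paper's actual search space, objective, or implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over candidate-operation logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nas_objective(task_losses, op_flops, logits, lam=1e-9):
    """Expected task loss plus lambda times expected FLOPs, where the
    expectation is taken under the softmax distribution over candidate
    operations for a layer. Penalizing expected FLOPs steers the search
    toward cheaper operations (hypothetical formulation)."""
    w = softmax(logits)
    exp_loss = sum(wi * li for wi, li in zip(w, task_losses))
    exp_flops = sum(wi * fi for wi, fi in zip(w, op_flops))
    return exp_loss + lam * exp_flops

# Three hypothetical candidate ops for one layer:
# a heavy convolution, a light convolution, and a skip connection.
task_losses = [0.10, 0.12, 0.30]   # illustrative task-loss proxy per op
op_flops    = [8e9, 1e9, 0.0]      # illustrative operations per inference
logits      = [0.0, 0.0, 0.0]      # uniform architecture weights to start

print(nas_objective(task_losses, op_flops, logits, lam=1e-9))
```

In a real search, the architecture logits would be optimized jointly with the network weights, and the penalty coefficient would be tuned to hit a target complexity or latency budget.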