Paper Title

CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth

Authors

Zuo, Xingxing, Merrill, Nathaniel, Li, Wei, Liu, Yong, Pollefeys, Marc, Huang, Guoquan

Abstract

In this work, we present a lightweight, tightly-coupled deep depth network and visual-inertial odometry (VIO) system, which can provide accurate state estimates and dense depth maps of the immediate surroundings. Leveraging the proposed lightweight Conditional Variational Autoencoder (CVAE) for depth inference and encoding, we provide the network with previously marginalized sparse features from VIO to increase the accuracy of initial depth prediction and generalization capability. The compact encoded depth maps are then updated jointly with navigation states in a sliding window estimator in order to provide the dense local scene geometry. We additionally propose a novel method to obtain the CVAE's Jacobian which is shown to be more than an order of magnitude faster than previous works, and we additionally leverage First-Estimate Jacobian (FEJ) to avoid recalculation. As opposed to previous works relying on completely dense residuals, we propose to only provide sparse measurements to update the depth code and show through careful experimentation that our choice of sparse measurements and FEJs can still significantly improve the estimated depth maps. Our full system also exhibits state-of-the-art pose estimation accuracy, and we show that it can run in real-time with single-thread execution while utilizing GPU acceleration only for the network and code Jacobian.
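The abstract's central estimation idea — correcting a compact depth code from only sparse residuals, with the decoder Jacobian evaluated once at the first estimate (FEJ) and reused — can be sketched with a toy linear model. Everything below (the linear stand-in for the CVAE decoder, the dimensions, the noise levels) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Toy sketch of the update described in the abstract: an EKF-style
# correction of a compact depth "code" using only SPARSE depth
# measurements, with the decoder Jacobian computed once at the first
# estimate (FEJ) and never re-linearized. A linear map stands in for
# the CVAE decoder; all sizes and noise levels are assumptions.

rng = np.random.default_rng(0)

CODE_DIM = 8      # latent code size (assumed)
PIX = 100         # pixels in the flattened depth map (assumed)

D = rng.standard_normal((PIX, CODE_DIM)) * 0.1   # stand-in "decoder"
b = np.full(PIX, 2.0)                            # nominal 2 m scene depth

def decode(code):
    """Map a compact code to a dense depth map (linear stand-in)."""
    return D @ code + b

# Ground-truth code and a handful of sparse depth measurements
# (playing the role of the VIO sparse features).
code_true = rng.standard_normal(CODE_DIM)
sparse_idx = rng.choice(PIX, size=12, replace=False)
sigma = 0.01
z = decode(code_true)[sparse_idx] + rng.normal(0.0, sigma, sparse_idx.size)

# Prior: zero code with unit covariance.
code_est = np.zeros(CODE_DIM)
P = np.eye(CODE_DIM)

# First-estimate Jacobian: decoder Jacobian rows at the sparse pixels
# only, evaluated once at the prior estimate.
H = D[sparse_idx]
R = sigma**2 * np.eye(sparse_idx.size)

# Single EKF update of the code from the sparse residuals.
r = z - decode(code_est)[sparse_idx]
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
code_est = code_est + K @ r

# RMS depth error over the full dense map, before and after the update.
err_prior = np.sqrt(np.mean((decode(np.zeros(CODE_DIM)) - decode(code_true)) ** 2))
err_post = np.sqrt(np.mean((decode(code_est) - decode(code_true)) ** 2))
```

The point of the sketch is the data flow: a few sparse residuals, pushed through a fixed Jacobian of the decoder, are enough to correct the entire dense map, because the map lives in a low-dimensional code space.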
