Paper Title
Occlusion Guided Scene Flow Estimation on 3D Point Clouds
Paper Authors
Paper Abstract
3D scene flow estimation is a vital tool for perceiving our environment with depth or range sensors. Unlike optical flow, the data is usually sparse and, in most cases, partially occluded between the two temporal samplings. Here we propose a new scene flow architecture called OGSF-Net, which tightly couples the learning of both flow and occlusions between frames. Their coupled symbiosis results in more accurate flow predictions in space. Unlike a traditional multi-action network, our unified approach is fused throughout the network, boosting performance for both occlusion detection and flow estimation. Our architecture is the first to gauge occlusion in 3D scene flow estimation on point clouds. On key datasets such as FlyingThings3D and KITTI, we achieve state-of-the-art results.
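To illustrate the coupling described in the abstract, the sketch below shows one plausible way a per-point head could jointly predict a 3D flow vector and an occlusion probability from shared point features, with the occlusion estimate gating the flow. This is a minimal assumption-based sketch (the module name FlowOcclusionHead, the layer sizes, and the gating scheme are hypothetical), not the authors' OGSF-Net implementation.

    import torch
    import torch.nn as nn

    class FlowOcclusionHead(nn.Module):
        """Hypothetical per-point head: jointly predicts scene flow and an
        occlusion probability from shared point-cloud features."""

        def __init__(self, feat_dim=128):
            super().__init__()
            # Shared 1x1 convolutions over the point dimension.
            self.shared = nn.Sequential(
                nn.Conv1d(feat_dim, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 128, 1), nn.ReLU(),
            )
            self.flow_out = nn.Conv1d(128, 3, 1)  # per-point flow (dx, dy, dz)
            self.occ_out = nn.Conv1d(128, 1, 1)   # per-point occlusion logit

        def forward(self, feats):
            # feats: (B, feat_dim, N) features of the source point cloud.
            h = self.shared(feats)
            flow = self.flow_out(h)               # (B, 3, N)
            occ = torch.sigmoid(self.occ_out(h))  # (B, 1, N), 1 = visible
            # Gate the flow by the occlusion estimate so that points judged
            # occluded contribute less to the predicted motion field
            # (an assumed coupling, mirroring the paper's idea in spirit).
            return flow * occ, occ

    if __name__ == "__main__":
        feats = torch.randn(2, 128, 2048)  # 2 clouds, 2048 points each
        flow, occ = FlowOcclusionHead()(feats)
        print(flow.shape, occ.shape)       # (2, 3, 2048) and (2, 1, 2048)

In the actual OGSF-Net, this fusion of occlusion and flow reasoning happens throughout the network rather than in a single output head; the sketch only conveys the basic idea of predicting both quantities from shared features and letting one inform the other.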