Paper Title
Virtual Multi-view Fusion for 3D Semantic Segmentation
Paper Authors
Paper Abstract
Semantic segmentation of 3D meshes is an important problem for 3D scene understanding. In this paper we revisit the classic multi-view representation of 3D meshes and study several techniques that make it effective for 3D semantic segmentation of meshes. Given a 3D mesh reconstructed from RGBD sensors, our method effectively chooses different virtual views of the 3D mesh and renders multiple 2D channels for training an effective 2D semantic segmentation model. Features from multiple per-view predictions are finally fused on 3D mesh vertices to predict mesh semantic segmentation labels. Using the large-scale indoor 3D semantic segmentation benchmark of ScanNet, we show that our virtual views enable more effective training of 2D semantic segmentation networks than previous multi-view approaches. When the 2D per-pixel predictions are aggregated on 3D surfaces, our virtual multi-view fusion method achieves significantly better 3D semantic segmentation results than all prior multi-view approaches, and is competitive with recent 3D convolution approaches.
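
The abstract describes a pipeline that renders virtual views of a 3D mesh, runs 2D semantic segmentation per view, and fuses the per-pixel predictions back onto mesh vertices. As a rough illustration of that fusion step only, the sketch below (not the paper's implementation; the data layout, occlusion threshold, and all function and field names are assumptions) projects mesh vertices into each view, accumulates the per-pixel class logits for visible vertices, and takes a per-vertex argmax.

import numpy as np

def fuse_multiview_predictions(vertices, views, num_classes):
    # vertices : (V, 3) mesh vertex positions in world coordinates.
    # views    : list of dicts, one per virtual view, with (assumed) fields:
    #   'logits'     (H, W, C) per-pixel class logits from the 2D model,
    #   'intrinsics' (3, 3) camera matrix K,
    #   'extrinsics' (4, 4) world-to-camera transform,
    #   'depth'      (H, W) rendered depth map, used for a visibility check.
    # Returns a (V,) array of fused per-vertex class labels.
    accum = np.zeros((len(vertices), num_classes), dtype=np.float64)
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)

    for view in views:
        K, T = view['intrinsics'], view['extrinsics']
        H, W, _ = view['logits'].shape

        # Transform vertices into the camera frame and project to pixel coordinates.
        cam = (T @ homo.T).T[:, :3]
        z = np.maximum(cam[:, 2], 1e-6)
        proj = (K @ cam.T).T
        px = np.round(proj[:, 0] / z).astype(int)
        py = np.round(proj[:, 1] / z).astype(int)
        in_view = (cam[:, 2] > 1e-6) & (px >= 0) & (px < W) & (py >= 0) & (py < H)

        # Keep only vertices whose depth agrees with the rendered depth buffer,
        # i.e. vertices actually visible (not occluded) in this view.
        idx = np.where(in_view)[0]
        visible = np.abs(view['depth'][py[idx], px[idx]] - cam[idx, 2]) < 0.05
        idx = idx[visible]

        # Accumulate the per-pixel logits at each visible vertex.
        accum[idx] += view['logits'][py[idx], px[idx]]

    return accum.argmax(axis=1)

Summing logits across views with a depth-based visibility test is one plausible way to aggregate per-view predictions on the mesh surface; the paper's actual fusion of per-view features may differ in detail.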