Paper Title
A Robust and Reliable Point Cloud Recognition Network Under Rigid Transformation
Paper Authors
Paper Abstract
Point cloud recognition is an essential task in industrial robotics and autonomous driving. Recently, several point cloud processing models have achieved state-of-the-art performance. However, these methods lack rotation robustness, and their performance degrades severely under random rotations, failing to extend to real-world scenarios with varying orientations. To this end, we propose a method named Self Contour-based Transformation (SCT), which can be flexibly integrated into various existing point cloud recognition models to cope with arbitrary rotations. SCT provides efficient rotation and translation invariance by introducing the Contour-Aware Transformation (CAT), which linearly transforms the Cartesian coordinates of points into translation- and rotation-invariant representations. We prove through theoretical analysis that CAT is a rotation- and translation-invariant transformation. Furthermore, a Frame Alignment module is proposed to enhance discriminative feature extraction by capturing contours and transforming self contour-based frames into intra-class frames. Extensive experimental results show that SCT outperforms state-of-the-art approaches under arbitrary rotations in both effectiveness and efficiency on synthetic and real-world benchmarks. Furthermore, robustness and generality evaluations indicate that SCT is robust and applicable to various point cloud processing models, highlighting its suitability for industrial applications.
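The abstract does not spell out how CAT achieves translation and rotation invariance, so the following is only a minimal, hypothetical sketch of the general idea of an invariant linear transform: express each point in a frame derived from the cloud itself (here, a PCA-based frame with a heuristic sign fix), so that centering removes translation and the self-derived axes cancel rotation. The function `self_frame_invariant_coords` and its sign-resolution heuristic are illustrative assumptions, not the paper's actual CAT or Frame Alignment module.

```python
import numpy as np

def self_frame_invariant_coords(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud to coordinates invariant to rigid
    translation and rotation for generic clouds, by expressing the
    points in a frame derived from the cloud itself.

    Illustrative only; NOT the paper's CAT formulation.
    """
    # Translation invariance: subtract the centroid.
    centered = points - points.mean(axis=0, keepdims=True)

    # Self-derived frame: principal axes of the covariance matrix.
    # Rotating the input rotates these axes identically, so the
    # coordinates projected onto them are unchanged (up to axis signs).
    cov = centered.T @ centered / len(centered)
    _, eigvecs = np.linalg.eigh(cov)      # eigenvalues ascending
    frame = eigvecs[:, ::-1]              # columns ordered by decreasing variance

    # Resolve the per-axis sign ambiguity heuristically: orient each
    # axis toward the point farthest from the centroid.
    anchor = centered[np.argmax(np.linalg.norm(centered, axis=1))]
    signs = np.sign(frame.T @ anchor)
    signs[signs == 0] = 1.0
    frame = frame * signs

    # Coordinates in the self-frame: a linear transform of the inputs.
    return centered @ frame


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(1024, 3))

    # Apply a random rigid transform (proper rotation + translation).
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    rot = q * np.sign(np.linalg.det(q))
    moved = pts @ rot.T + rng.normal(size=(1, 3))

    a = self_frame_invariant_coords(pts)
    b = self_frame_invariant_coords(moved)
    print("max deviation:", np.abs(a - b).max())  # ~0 for generic clouds
```

The sign heuristic and PCA frame break down for degenerate clouds (e.g., equal covariance eigenvalues); the paper's contour-based frame is presumably designed to be more discriminative and stable, per the Frame Alignment discussion.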