Title
Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation
Authors
Abstract
We present a lightweight solution for recovering 3D pose from multi-view images captured with spatially calibrated cameras. Building upon recent advances in interpretable representation learning, we exploit 3D geometry to fuse the input images into a unified latent representation of pose, which is disentangled from camera viewpoints. This allows us to reason effectively about 3D pose across different views without using compute-intensive volumetric grids. Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections, which can simply be lifted to 3D via a differentiable Direct Linear Transform (DLT) layer. To do this efficiently, we propose a novel implementation of DLT that is orders of magnitude faster on GPU architectures than standard SVD-based triangulation methods. We evaluate our approach on two large-scale human pose datasets (H36M and Total Capture): our method outperforms or performs comparably to state-of-the-art volumetric methods while, unlike them, achieving real-time performance.
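For context, the DLT triangulation that the abstract refers to solves a homogeneous linear system built from each view's projection matrix and 2D detection. The sketch below shows the standard SVD-based variant that the paper uses as a baseline (the paper's own faster GPU implementation avoids the full SVD and is not reproduced here); function names and the NumPy-based formulation are illustrative, not from the paper.

```python
import numpy as np

def dlt_triangulate(proj_mats, points_2d):
    """Triangulate one 3D point from N views via the Direct Linear Transform.

    proj_mats: (N, 3, 4) camera projection matrices.
    points_2d: (N, 2) 2D detections of the same joint, one per view.

    This is the standard SVD-based DLT baseline mentioned in the abstract.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)            # (2N, 4) system A @ X = 0
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # right singular vector of the smallest singular value
    return X[:3] / X[3]           # dehomogenize to a 3D point
```

With exact (noise-free) projections from two or more calibrated cameras, the recovered point matches the ground truth up to numerical precision; with noisy detections it returns the algebraic least-squares solution.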