Title:
Fine Detailed Texture Learning for 3D Meshes with Generative Models
Authors:
Abstract:
This paper presents a method to reconstruct high-quality textured 3D models from both multi-view and single-view images. The reconstruction is posed as an adaptation problem and is done progressively: in the first stage, we focus on learning accurate geometry, whereas in the second stage, we focus on learning the texture with a generative adversarial network. In the generative learning pipeline, we propose two improvements. First, since the learned textures should be spatially aligned, we propose an attention mechanism that relies on the learnable positions of pixels. Second, since the discriminator receives aligned texture maps, we augment its input with a learnable embedding, which improves the feedback to the generator. We achieve significant improvements on multi-view sequences from the Tripod dataset as well as on the single-view image datasets Pascal 3D+ and CUB. We demonstrate that our method achieves superior 3D textured models compared to previous works. Please visit our web-page for 3D visuals.
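The abstract does not spell out the attention mechanism, but the idea of attending over texture-map pixels with learnable per-pixel positions can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function name `position_aware_attention`, the concatenation of features with position embeddings, and all shapes are hypothetical and do not reflect the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_aware_attention(features, positions, Wq, Wk, Wv):
    """Illustrative attention over texture-map pixels.

    Each pixel's query/key is built from its feature vector
    concatenated with a *learnable* position embedding, so the
    attention weights depend on where a pixel sits in the texture map.

    features:  (N, C)   per-pixel features (N = H*W flattened pixels)
    positions: (N, P)   learnable per-pixel position embeddings
    Wq, Wk:    (C+P, D) projections applied to [feature, position]
    Wv:        (C, D)   value projection applied to features only
    """
    x = np.concatenate([features, positions], axis=1)   # (N, C+P)
    q = x @ Wq                                          # (N, D)
    k = x @ Wk                                          # (N, D)
    v = features @ Wv                                   # (N, D)
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]), axis=1)  # (N, N)
    return attn @ v                                     # (N, D)

# toy usage with random weights (in training these would be learned)
rng = np.random.default_rng(0)
N, C, P, D = 16, 8, 4, 8
out = position_aware_attention(
    rng.normal(size=(N, C)), rng.normal(size=(N, P)),
    rng.normal(size=(C + P, D)), rng.normal(size=(C + P, D)),
    rng.normal(size=(C, D)))
print(out.shape)
```

In a real pipeline the position embeddings and projection matrices would be trainable parameters updated jointly with the generator; this sketch only shows how positional information can enter the attention computation.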