Paper Title
Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
Paper Authors
Paper Abstract
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.
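The abstract describes a pipeline that decouples garment draping into three additive deformation stages, each predicted by a fully convolutional graph network operating on the garment mesh. The following is a minimal illustrative sketch, not the authors' implementation: it uses a toy quad panel, a single hand-rolled graph-convolution layer with random weights, and a hypothetical additive decomposition of the per-vertex offsets into garment-fit, body-shape, and wrinkle terms.

```python
import numpy as np

# Toy garment panel: 4 mesh vertices and the edges connecting them.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Row-normalized adjacency with self-loops: the aggregation step of a
# basic graph-convolution layer, valid for any mesh topology.
n = len(verts)
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A /= A.sum(axis=1, keepdims=True)

def graph_conv(x, W):
    """One graph-convolution layer: average neighbor features, project, squash."""
    return np.tanh(A @ x @ W)

rng = np.random.default_rng(0)
W1, W2, W3 = (rng.standard_normal((3, 3)) * 0.1 for _ in range(3))

# Decoupled deformation stages (illustrative): each predicts a per-vertex
# 3D offset, and the final drape is their sum, mirroring the three sources
# of deformation named in the abstract.
d_garment = graph_conv(verts, W1)   # stage 1: drape on the mean body shape
d_shape   = graph_conv(verts, W2)   # stage 2: target-body-shape correction
d_wrinkle = graph_conv(verts, W3)   # stage 3: material-dependent wrinkles

draped = verts + d_garment + d_shape + d_wrinkle
print(draped.shape)  # one 3D position per garment vertex
```

Because the same convolution weights are shared across all vertices, a model of this form is independent of the number of vertices and their connectivity, which is what lets a single network handle garments with arbitrary mesh topologies.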