Paper Title

RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks

Paper Authors

Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu

Paper Abstract

Modeling and manipulating elasto-plastic objects are essential capabilities for robots to perform complex industrial and household interaction tasks (e.g., stuffing dumplings, rolling sushi, and making pottery). However, due to the high degree of freedom of elasto-plastic objects, significant challenges exist in virtually every aspect of the robotic manipulation pipeline, e.g., representing the states, modeling the dynamics, and synthesizing the control signals. We propose to tackle these challenges by employing a particle-based representation for elasto-plastic objects in a model-based planning framework. Our system, RoboCraft, only assumes access to raw RGBD visual observations. It transforms the sensing data into particles and learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system. The learned model can then be coupled with model-predictive control (MPC) algorithms to plan the robot's behavior. We show through experiments that with just 10 minutes of real-world robotic interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various target shapes, including shapes that the robot has never encountered before. We perform systematic evaluations in both simulation and the real world to demonstrate the robot's manipulation capabilities and ability to generalize to a more complex action space, different tool shapes, and a mixture of motion modes. We also conduct comparisons between RoboCraft and untrained human subjects controlling the gripper to manipulate deformable objects in both simulation and the real world. Our learned model-based planning framework is comparable to and sometimes better than human subjects on the tested tasks.
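The abstract describes a pipeline of sensing particles from RGBD observations, learning a particle-based GNN dynamics model, and planning tool motions with MPC. The sketch below is only a minimal, hypothetical illustration of that idea in PyTorch; the names (`ParticleGNN`, `build_edges`, `mpc_random_shooting`), the single message-passing round, and the random-shooting planner with a one-directional Chamfer-style cost are assumptions for illustration, not the RoboCraft implementation.

```python
# Minimal, illustrative sketch: particles -> neighborhood graph -> GNN dynamics -> random-shooting MPC.
# All names and design choices here are hypothetical and not taken from the RoboCraft codebase.
import torch
import torch.nn as nn


def build_edges(particles, radius=0.1):
    """Connect particle pairs closer than `radius` (simple neighborhood graph)."""
    dist = torch.cdist(particles, particles)            # (N, N) pairwise distances
    src, dst = torch.nonzero(dist < radius, as_tuple=True)
    mask = src != dst                                    # drop self-edges
    return src[mask], dst[mask]


class ParticleGNN(nn.Module):
    """One round of message passing that predicts per-particle displacement."""
    def __init__(self, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(3 + hidden + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, particles, action):
        # particles: (N, 3) positions; action: (3,) tool motion, broadcast to every node
        src, dst = build_edges(particles)
        edge_feat = self.edge_mlp(torch.cat([particles[src], particles[dst] - particles[src]], dim=-1))
        agg = torch.zeros(particles.shape[0], edge_feat.shape[-1])
        agg = agg.index_add(0, dst, edge_feat)           # sum incoming messages per node
        act = action.expand(particles.shape[0], -1)
        delta = self.node_mlp(torch.cat([particles, agg, act], dim=-1))
        return particles + delta                         # predicted next-step positions


def mpc_random_shooting(model, particles, target, horizon=5, samples=128):
    """Return the first action of the sampled sequence whose rollout best matches the target shape."""
    best_cost, best_action = float("inf"), None
    for _ in range(samples):
        actions = 0.02 * torch.randn(horizon, 3)         # candidate tool-motion sequence
        state = particles.clone()
        with torch.no_grad():
            for a in actions:
                state = model(state, a)
        cost = torch.cdist(state, target).min(dim=1).values.mean()  # crude one-sided Chamfer distance
        if cost < best_cost:
            best_cost, best_action = cost.item(), actions[0]
    return best_action


if __name__ == "__main__":
    model = ParticleGNN()
    particles = torch.rand(50, 3)                        # stand-in for particles sampled from RGBD sensing
    target = torch.rand(50, 3)                           # stand-in for the desired target shape
    print("next tool motion:", mpc_random_shooting(model, particles, target))
```

In the paper's framing, the dynamics model would be trained on the roughly 10 minutes of real-robot interaction data before being used inside the MPC loop; the sketch above omits training and shows only the prediction-and-planning structure.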
