Paper Title
SinFusion: Training Diffusion Models on a Single Image or Video
Paper Authors
Paper Abstract
Diffusion models have exhibited tremendous progress in image and video generation, exceeding GANs in quality and diversity. However, they are usually trained on very large datasets and are not naturally adapted to manipulating a given input image or video. In this paper we show how this can be resolved by training a diffusion model on a single input image or video. Our image/video-specific diffusion model (SinFusion) learns the appearance and dynamics of the single image or video, while utilizing the conditioning capabilities of diffusion models. It can solve a wide array of image/video-specific manipulation tasks. In particular, our model can learn from a few frames the motion and dynamics of a single input video. It can then generate diverse new video samples of the same dynamic scene, extrapolate short videos into long ones (both forward and backward in time), and perform video upsampling. Most of these tasks are not realizable by current video-specific generation methods.
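To make the single-image training idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of the general setup the abstract describes: a small convolutional denoiser is trained with a standard DDPM noise schedule on random crops of a single image. The network, crop size, schedule, and all names here are assumptions made for illustration, not the actual SinFusion architecture or training procedure.

```python
# Hypothetical minimal sketch of single-image diffusion training (NOT the
# authors' implementation): a tiny conv denoiser is trained on random crops
# of ONE image, using a standard linear DDPM beta schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                    # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)       # linear beta schedule (DDPM)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    """Illustrative fully-convolutional noise predictor (an assumption,
    not the SinFusion backbone)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

def random_crops(image, crop=64, batch=16):
    """Sample a batch of random crops from the single training image (C, H, W)."""
    _, H, W = image.shape
    ys = torch.randint(0, H - crop + 1, (batch,))
    xs = torch.randint(0, W - crop + 1, (batch,))
    return torch.stack([image[:, y:y + crop, x:x + crop] for y, x in zip(ys, xs)])

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
image = torch.rand(3, 256, 256) * 2 - 1     # stand-in for the single input image

for step in range(10000):
    x0 = random_crops(image)                        # (B, 3, 64, 64) clean crops
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise  # forward diffusion q(x_t | x0)
    loss = F.mse_loss(model(x_t, t), noise)         # train to predict the added noise
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the denoiser in this sketch is fully convolutional, new samples of the same scene could in principle be drawn at other resolutions by running the usual reverse diffusion process; the video tasks in the abstract (extrapolation, upsampling) would additionally condition the denoiser on previous frames, which is not shown here.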