Paper Title
Text-to-Image Generation with Attention Based Recurrent Neural Networks
Paper Authors
Paper Abstract
Conditional image modeling based on textual descriptions is a relatively new domain in unsupervised learning. Previous approaches use latent variable models and generative adversarial networks. While the former are approximated using variational auto-encoders and rely on intractable inference, which can hamper their performance, the latter are unstable to train due to their Nash-equilibrium-based objective function. We develop a tractable and stable caption-based image generation model. The model uses an attention-based encoder to learn word-to-pixel dependencies. A conditional autoregressive decoder is used to learn pixel-to-pixel dependencies and generate images. Experiments are performed on the Microsoft COCO and MNIST-with-captions datasets, and performance is evaluated using the Structural Similarity Index. Results show that the proposed model performs better than contemporary approaches and generates better-quality images. Keywords: Generative image modeling, autoregressive image modeling, caption-based image generation, neural attention, recurrent neural networks.
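To make the architecture described in the abstract concrete, below is a minimal sketch in PyTorch (an assumption; the abstract does not name a framework): a bidirectional LSTM caption encoder, additive attention that weights word states at each generation step (the word-to-pixel dependencies), and an LSTM decoder that emits one pixel at a time conditioned on the previous pixel and the attended caption context (the pixel-to-pixel dependencies). All class names, layer sizes, the vocabulary, and the 256-way intensity softmax are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of an attention-based caption encoder plus a conditional
# autoregressive pixel decoder. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class CaptionEncoder(nn.Module):
    """Bidirectional LSTM over caption tokens, producing per-word states."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True,
                            batch_first=True)

    def forward(self, tokens):                      # tokens: (B, T)
        states, _ = self.lstm(self.embed(tokens))   # (B, T, 2*hidden_dim)
        return states


class Attention(nn.Module):
    """Additive attention: scores each word state against the decoder state."""
    def __init__(self, word_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.proj_w = nn.Linear(word_dim, attn_dim)
        self.proj_h = nn.Linear(dec_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, words, h):                    # words: (B, T, Dw), h: (B, Dh)
        e = self.score(torch.tanh(self.proj_w(words) +
                                  self.proj_h(h).unsqueeze(1)))  # (B, T, 1)
        alpha = torch.softmax(e, dim=1)             # word-to-pixel weights
        return (alpha * words).sum(dim=1)           # context: (B, Dw)


class PixelDecoder(nn.Module):
    """LSTM over the flattened pixel sequence; each step sees the previous
    pixel plus an attention context and emits a 256-way intensity softmax."""
    def __init__(self, word_dim=512, dec_dim=256, levels=256):
        super().__init__()
        self.attn = Attention(word_dim, dec_dim)
        self.cell = nn.LSTMCell(1 + word_dim, dec_dim)
        self.out = nn.Linear(dec_dim, levels)

    def forward(self, pixels, words):               # pixels: (B, N) in [0, 1]
        B, N = pixels.shape
        h = pixels.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        prev = pixels.new_zeros(B, 1)               # "pixel" before the first one
        logits = []
        for i in range(N):                          # pixel-to-pixel dependencies
            ctx = self.attn(words, h)               # re-attend at every step
            h, c = self.cell(torch.cat([prev, ctx], dim=1), (h, c))
            logits.append(self.out(h))
            prev = pixels[:, i:i + 1]               # teacher forcing
        return torch.stack(logits, dim=1)           # (B, N, levels)


# Usage: one training step on random tensors (shapes only, no real dataset).
encoder, decoder = CaptionEncoder(), PixelDecoder()
tokens = torch.randint(0, 10000, (2, 12))           # batch of 2 captions
pixels = torch.rand(2, 64)                          # flattened 8x8 grayscale
logits = decoder(pixels, encoder(tokens))
targets = (pixels * 255).long().clamp(0, 255)
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), targets.reshape(-1))
loss.backward()
```

The per-pixel softmax gives an exact, tractable likelihood, which is the property the abstract contrasts with variational approximation in latent variable models and with the training instability of GANs; at sampling time one would draw each pixel from its softmax and feed it back as the next step's input.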