DrawGAN: Text-to-Image Synthesis with Drawing Generative Adversarial Networks
Zhiqiang Zhang, Jinjia Zhou, Wenxin Yu, Ning Jiang
IEEE Signal Processing Society (SPS)
In this paper, we propose a novel Drawing Generative Adversarial Network (DrawGAN) for text-to-image synthesis. The model divides image synthesis into three stages that imitate the process of drawing: the first stage synthesizes a simple contour image from the text description, the second stage generates a foreground image with detailed information, and the third stage synthesizes the final result. By synthesizing step by step, from simple to complex and from easy to difficult, the model draws intermediate results progressively and ultimately achieves higher-quality image synthesis. Our method is validated on the Caltech-UCSD Birds 200 (CUB) dataset and the Microsoft Common Objects in Context (MS COCO) dataset. The experimental results demonstrate the effectiveness and superiority of our method: in both subjective and objective evaluation, its results surpass existing state-of-the-art methods.
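The staged pipeline described above can be sketched as a composition of three generator stages. This is a minimal, hypothetical illustration only: the stage names and the dictionary-based "image" representation are assumptions for clarity, whereas the paper's actual stages are neural-network generators conditioned on the text.

```python
# Hypothetical sketch of DrawGAN's three-stage drawing pipeline.
# Each stage stub stands in for a conditional generator network;
# the dict merely records which stage has contributed so far.

def stage1_contour(text_description):
    """Stage 1: synthesize a simple contour image from the text description."""
    return {"text": text_description, "contour": "rough outline"}

def stage2_foreground(image):
    """Stage 2: add detailed foreground information to the contour."""
    refined = dict(image)
    refined["foreground"] = "detailed foreground"
    return refined

def stage3_final(image):
    """Stage 3: synthesize the final, higher-quality result."""
    refined = dict(image)
    refined["final"] = True
    return refined

def drawgan(text_description):
    """Compose the stages from simple to complex, mimicking drawing."""
    return stage3_final(stage2_foreground(stage1_contour(text_description)))

result = drawgan("a small bird with yellow wings")
print(result["final"])  # True once all three stages have run in order
```

The point of the composition is that each stage only has to solve an easier sub-problem than generating the full image at once, which is the intuition behind the easy-to-difficult synthesis order.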
Chairs:
Marta Mrak