Semantic image manipulation (SIM) aims to generate realistic images from an input source image and a target text description, such that the generated images not only match the content of the description but also preserve the text-irrelevant features of the source image. This requires learning a good mapping between visual features and linguistic features. Previous works on SIM can only generate images of limited resolution that typically lack fine, clear details. In this work, we aim to generate high-resolution photo-realistic images for SIM. Specifically, we propose SIMGAN, a generative adversarial network (GAN) based architecture capable of generating images of size 256 × 256 for SIM. We demonstrate the effectiveness of SIMGAN and its superiority over existing methods via qualitative and quantitative evaluation on the Caltech-200 and Oxford-102 datasets.
Junling Liu, Yuexian Zou, Dongming Yang
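The abstract describes a generator that maps a noise vector and an encoded text description to a 256 × 256 RGB image. As a minimal illustrative sketch of that interface only (all dimensions, names, and the single linear map are assumptions, using NumPy in place of a deep-learning framework and nearest-neighbour upsampling in place of learned transposed convolutions):

```python
import numpy as np

rng = np.random.default_rng(0)

Z_DIM, TXT_DIM, IMG = 100, 128, 256  # illustrative sizes, not from the paper

# Hypothetical generator weights: one linear map standing in for the
# stacked upsampling blocks a real GAN generator would learn.
W = rng.standard_normal((Z_DIM + TXT_DIM, 16 * 16 * 3)) * 0.01

def generate(z, text_emb):
    """Map (noise, text embedding) -> 256x256x3 image with pixels in [-1, 1]."""
    h = np.concatenate([z, text_emb])        # condition generation on the text
    low = np.tanh(h @ W).reshape(16, 16, 3)  # coarse 16x16 image; tanh bounds pixels
    # Nearest-neighbour upsample to 256x256, a stand-in for learned upsampling.
    return low.repeat(16, axis=0).repeat(16, axis=1)

z = rng.standard_normal(Z_DIM)
text_emb = rng.standard_normal(TXT_DIM)      # stand-in for an encoded caption
fake = generate(z, text_emb)
print(fake.shape)  # (256, 256, 3)
```

In a real conditional GAN, a discriminator would additionally score (image, text) pairs, pushing the generator toward images that both look realistic and match the caption; the sketch above shows only the generator's input/output contract.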