Lu Yu, Ju Liu, Xueyin Zhao, Xiaoxi Liu, Weiqiang Chen, Xuesong Gao
In this paper, we present a novel approach for image translation based on Generative Adversarial Networks (GANs). We apply the self-attention mechanism to improve the quality of the generated images, attending not only to local feature representations but also to global structural correlations. We also adopt the cyclic image-translation idea of CycleGAN. Moreover, to stabilize training and reduce the probability of abnormal gradients, two techniques, spectral normalization and the two time-scale update rule (TTUR), are applied during the training process. In the experimental section, we compare our method with the original CycleGAN in terms of subjective evaluation, objective scoring, classification verification, and computational cost. Both subjective and objective results show that our model can generate high-quality and diverse images using unpaired, unlabeled samples. The two training techniques accelerate the convergence of the network and help the model reach the optimum in less time. Extensive comparisons against CycleGAN demonstrate that our proposed method is superior to the original one.
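To make the stabilization technique concrete, below is a minimal NumPy sketch of spectral normalization as used in SN-GANs: the largest singular value of a weight matrix is estimated by power iteration, and the weights are rescaled so the layer's spectral norm is approximately 1. The function name and iteration count are illustrative, not from the paper.

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Scale W so that its spectral norm (largest singular value) is ~1.

    Uses power iteration to estimate the dominant singular value,
    which is how spectral normalization is implemented in practice
    (with one iteration per training step, amortized over training).
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    v = W.T @ u
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value
    return W / sigma

# Example: normalize a random 64x32 weight matrix.
W = np.random.default_rng(1).standard_normal((64, 32))
W_sn = spectral_normalize(W)
```

TTUR complements this by simply using a larger learning rate for the discriminator than for the generator (e.g. 4:1), requiring no architectural change.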