YAN Mingqiang, YU Pengfei, LI Haiyan, LI Hongsong
The goal of image style transfer is to synthesize an output image by transferring the style of a target image onto a given content image. Although there is a large body of work on image style transfer, existing stylization results ignore the manifold distribution of the different semantic regions of the content image. Moreover, most methods use global statistics (for example, the Gram matrix or covariance matrix) to match style features to content features, which inevitably causes content loss, style leakage, and artifacts, yielding inconsistent stylized results. To address these problems, a self-attention-based progressive manifold feature mapping module (MFMM-AM) is proposed to coordinately match features between related content and style manifolds. Exact histogram matching (EHM) is applied to achieve higher-order distribution matching between the style and content feature maps, reducing the loss of image information. Finally, two contrastive losses are introduced that exploit the external information of large-scale style datasets to learn the style information perceived by human beings, making the color distribution and texture patterns of stylized images more reasonable. Experimental results show that, compared with existing typical arbitrary image style transfer methods, the proposed network greatly narrows the gap between human-created and AI-created artworks and can generate visually more harmonious and satisfying artistic images.
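The exact histogram matching (EHM) step mentioned above can be illustrated with a minimal rank-based sketch: every element of the content feature map receives the style value of equal rank, so the style histogram is transferred exactly rather than only up to first- or second-order statistics. This is a generic illustration of the EHM idea, not the paper's implementation; it assumes the two feature maps have been resampled to the same number of elements.

```python
import numpy as np

def exact_histogram_match(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Transfer the style values' histogram exactly onto the content layout.

    The i-th smallest content element is replaced by the i-th smallest
    style value, so the output has exactly the style histogram while
    preserving the spatial ordering (ranks) of the content feature map.
    Assumes content and style contain the same number of elements.
    """
    flat_c = content.ravel()
    flat_s = style.ravel()
    # positions of content elements in ascending order (stable tie-breaking)
    order = np.argsort(flat_c, kind="stable")
    sorted_s = np.sort(flat_s)
    out = np.empty_like(sorted_s)
    out[order] = sorted_s  # rank-matched assignment
    return out.reshape(content.shape)
```

Because the assignment is a pure permutation of the style values, the output's histogram equals the style histogram exactly, which is what distinguishes EHM from moment-matching with a Gram or covariance matrix.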