Recent deep-network-based style transfer methods add semantic guidance to the iterative optimization in order to generate stylized images that better match the content. However, these approaches only guarantee that the overall color and texture distributions are transferred between semantically equivalent regions; local variation within those regions is not accurately captured, so the resulting images lack local plausibility. To address this, we develop a non-parametric, patch-based style transfer framework that synthesizes more content-coherent images. By designing a novel patch matching algorithm that simultaneously takes high-level category information and geometric structure information (e.g., human pose and building structure) into account, our method transfers more detailed distributions and produces more photorealistic stylized images. We show that our approach achieves remarkable style transfer results on content with geometric structure, including human bodies, vehicles, and buildings.
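The semantically guided matching described above can be illustrated as a label-constrained nearest-neighbour search over patches. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: patches are pre-extracted and flattened, semantic labels are given per patch, and the geometric-structure term is abstracted into a single label penalty; all names and parameters are hypothetical.

```python
import numpy as np

def semantic_patch_match(content_patches, style_patches,
                         content_labels, style_labels,
                         label_penalty=1e6):
    """For each flattened content patch, return the index of the closest
    style patch, heavily penalizing matches whose semantic label differs.

    content_patches: (N, D) array, style_patches: (M, D) array,
    content_labels: (N,) ints, style_labels: (M,) ints.
    """
    matches = []
    for patch, label in zip(content_patches, content_labels):
        # Appearance term: sum of squared differences to each style patch.
        dist = np.sum((style_patches - patch) ** 2, axis=1)
        # Semantic term: a large penalty for crossing region boundaries,
        # standing in for the category/geometry constraints in the paper.
        dist = dist + label_penalty * (style_labels != label)
        matches.append(int(np.argmin(dist)))
    return matches
```

With a matching defined this way, an exact appearance match in the wrong semantic region (e.g., a sky patch matched to a wall) is rejected in favour of the best patch from the same region, which is the behaviour the abstract argues is needed for local plausibility.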
ZHANG Ying-tao, ZHANG Jie, ZHANG Rui, ZHANG Wen-qiang