Universal image style transfer requires not only preserving semantic content but also transferring arbitrary visual styles. Recent methods have made progress by processing an image as a whole, without considering the fine-grained styles of different semantic regions in the image. In this paper, we propose a Fine-Grained Style Transfer (FGST) model that renders different content image regions in different fine-grained styles, improving the comprehensibility and visual quality of the stylized image. Specifically, we first segment the input images into semantic regions, and then select style and content images that share the same semantic regions for training, preserving fine-grained style consistency. In addition, we design a new style loss function to evaluate style consistency between the output stylized image and the input style image. Experiments show that our model achieves better visual effects than state-of-the-art models.
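The region-wise style consistency described above can be illustrated with a minimal sketch: a Gram-matrix style loss computed separately for each semantic-region mask, then averaged. The function names, the use of Gram matrices, and the exact normalization are assumptions for illustration; the paper's actual loss formulation may differ.

```python
import numpy as np

def gram_matrix(feat):
    """Normalized Gram matrix of a (C, H, W) feature map -> (C, C)."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def region_style_loss(stylized_feat, style_feat, masks):
    """Hypothetical fine-grained style loss: for each semantic-region
    mask, compare Gram matrices of the masked stylized and style
    features (assumption: the loss is computed region by region).

    stylized_feat, style_feat: (C, H, W) feature maps
    masks: list of (H, W) binary masks, one per semantic region
    """
    loss = 0.0
    for m in masks:
        if m.sum() == 0:  # skip regions absent from this image pair
            continue
        g_out = gram_matrix(stylized_feat * m)   # mask broadcasts over channels
        g_sty = gram_matrix(style_feat * m)
        loss += np.sum((g_out - g_sty) ** 2)
    return loss / len(masks)
```

Keeping a separate Gram-matrix term per region prevents the texture statistics of one semantic region (e.g. sky) from bleeding into another (e.g. building), which is the failure mode of whole-image style losses.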
Jianbo Wang, Huan Yang, Jianlong Fu, Toshihiko Yamasaki, Baining Guo