Liang Yuan, Dingkun Yan, Suguru Saito, Issei Fujishiro
Creating realistic materials is essential in the construction of immersive virtual environments. Existing techniques for material capture and conditional generation rely on flash-lit photos, and they often produce artifacts when the illumination of the input differs from that of the training data. In this study, we introduce DiffMat, a novel diffusion model that integrates the CLIP image encoder with a multi-layer, cross-attention denoising backbone to generate latent materials from images under various illuminations. Using a pre-trained StyleGAN-based material generator, our method converts these latent materials into high-resolution SVBRDF textures, enabling a seamless fit into the standard physically based rendering pipeline while reducing the need for vast computational resources and expansive datasets. DiffMat surpasses existing generative methods in material quality and variety, and adapts to a broader spectrum of lighting conditions in reference images.
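The pipeline described above can be sketched end to end: encode the reference photo into a conditioning vector, run a conditional denoising loop in latent space, then decode the final latent into SVBRDF maps. The sketch below is a toy, numpy-only illustration of that data flow; every function (`clip_encode`, `denoise_step`, `decode_svbrdf`), the latent size, and the step count are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

LATENT_DIM = 8   # toy size; real latent materials are far larger
NUM_STEPS = 50   # number of denoising steps (illustrative)

def clip_encode(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the CLIP image encoder: projects
    simple image statistics to a fixed-length conditioning vector."""
    return np.array([image.mean(), image.std()] + [0.0] * (LATENT_DIM - 2))

def denoise_step(z: np.ndarray, cond: np.ndarray, t: int) -> np.ndarray:
    """Toy denoiser standing in for the cross-attention backbone:
    each step nudges the noisy latent toward the conditioning signal."""
    alpha = t / NUM_STEPS
    return z + 0.1 * (cond - z) * (1.0 - alpha)

def sample_latent(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Conditional sampling: start from pure noise, denoise iteratively."""
    cond = clip_encode(image)
    z = rng.standard_normal(LATENT_DIM)
    for t in range(NUM_STEPS, 0, -1):
        z = denoise_step(z, cond, t)
    return z

def decode_svbrdf(z: np.ndarray) -> dict:
    """Stand-in for the pre-trained StyleGAN-based generator: maps a
    latent material to per-pixel SVBRDF maps (4x4 toy textures here)."""
    names = ["albedo", "normal", "roughness", "specular"]
    return {name: np.tile(z[i], (4, 4)) for i, name in enumerate(names)}

rng = np.random.default_rng(0)
photo = rng.random((16, 16, 3))        # placeholder reference image
latent = sample_latent(photo, rng)
svbrdf = decode_svbrdf(latent)
print(sorted(svbrdf))  # ['albedo', 'normal', 'roughness', 'specular']
```

The decoded maps would then feed directly into a physically based renderer, which is what the abstract means by a "seamless fit" into the standard rendering pipeline.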