Yining Wang, Jinyi Zhang, Yuxi Jiang
In the field of three-dimensional (3D) reconstruction, neural radiance fields (NeRF) can implicitly represent high-quality 3D scenes. However, traditional NeRF places very high demands on the quality of the input images: when motion-blurred images are used as input, NeRF's requirement of multi-view consistency cannot be met, which significantly degrades the quality of the 3D reconstruction. To address this problem, we propose KT-NeRF, which extends NeRF to motion-blurred scenes. Starting from the physical principle of motion blur, the method derives the blur formation process from two-dimensional (2D) motion-blurred images into 3D space. A Gaussian process regression model is then introduced to estimate the camera's motion trajectory for each motion-blurred image, with the aim of learning accurate camera poses at key timestamps within the exposure time. These poses are fed to NeRF so that it can learn the blur information embedded in the images. Finally, the parameters of the Gaussian process regression model and of NeRF are jointly optimized to achieve multi-view anti-motion-blur reconstruction. Experiments show that KT-NeRF achieves a peak signal-to-noise ratio (PSNR) of 29.4 dB and a structural similarity index (SSIM) of 0.85, improvements of 3.5% and 2.4%, respectively, over existing state-of-the-art methods; the learned perceptual image patch similarity (LPIPS) is also reduced by 7.1% to 0.13.
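The core idea of estimating a smooth camera trajectory over the exposure time with Gaussian process regression, then sampling poses at key timestamps, can be sketched as follows. This is an illustrative toy example under simplifying assumptions, not the paper's implementation: the camera pose is reduced to a hypothetical 3-vector (x, y, yaw), and the observed poses and kernel settings are invented for demonstration.

```python
# Illustrative sketch (not KT-NeRF's actual code): fit a Gaussian process
# to a camera's motion during a normalized exposure interval [0, 1], then
# sample poses at key timestamps, which would be passed to NeRF so the
# rendered views can jointly explain the motion-blurred image.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sparse (hypothetical) pose observations over the exposure window.
t_obs = np.array([[0.0], [0.5], [1.0]])
pose_obs = np.array([
    [0.00, 0.00, 0.000],  # x, y, yaw at exposure start
    [0.02, 0.01, 0.015],  # mid-exposure
    [0.05, 0.01, 0.030],  # exposure end
])

# One GP with a shared RBF kernel regresses all pose dimensions over time;
# small alpha makes the GP interpolate the observations almost exactly.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gpr.fit(t_obs, pose_obs)

# Key timestamps inside the exposure time; each predicted pose is one
# virtual sharp view contributing to the blurred observation.
t_keys = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
pose_keys = gpr.predict(t_keys)
print(pose_keys.shape)  # (5, 3): one 3-DoF pose per key timestamp
```

In the full method, the GP hyperparameters would be optimized jointly with the NeRF weights rather than fixed as above, so that the inferred trajectory and the radiance field together reproduce the observed blur.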