Ran Yang, Liu Yang, Ruofei Zhong, Zhanying Wei, Mengbing Xu, Shuai Liu, Peng Yan
At present, widely used traditional three-dimensional (3D) reconstruction techniques are still insufficient for diverse scenarios. Compared to traditional methods, emerging novel view synthesis technologies such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) offer more realistic and comprehensive expression capabilities. However, most of these technologies still rely on traditional methods and require extensive, dense input views, which poses challenges for reconstruction in real-world scenarios. We propose MFGaussian, a 3DGS-based framework for 3D scene representation that fuses multi-modal data obtained from a mobile laser scanning (MLS) system to achieve high robustness and accuracy even with limited input views. MFGaussian employs a stepwise training approach to independently learn the global information and the details of the scene. During pre-training, a substantial number of virtual training views are generated by projecting colored point clouds, thereby enhancing the model's robustness; the model is then fine-tuned on the original training views. The method initializes the laser point cloud as 3D Gaussians and obtains camera parameters through multi-sensor calibration followed by spherical interpolation, thus acquiring high-precision initial data without relying on Structure from Motion (SfM), and further ensures accurate geometric structure through partial optimization. Furthermore, we analyze how variations in lighting brightness within a scene affect view synthesis from different perspectives and positions, and incorporate an appearance model to eliminate the resulting color ambiguity. Tested on our dataset and the ETH3D stereo benchmark, our method demonstrates enhanced capability and robustness of 3DGS in diverse scenarios without SfM or dense view inputs, outperforming several state-of-the-art methods in both quantitative and qualitative evaluations.
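The abstract mentions obtaining camera parameters through multi-sensor calibration followed by spherical interpolation. A minimal sketch of that interpolation step, assuming calibrated key poses and using SciPy's standard `Slerp` (spherical linear interpolation) for rotations with linear interpolation for positions; the pose values here are hypothetical, for illustration only:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Two key camera poses, e.g. from multi-sensor calibration (hypothetical values).
key_times = [0.0, 1.0]
key_rotations = Rotation.from_euler("z", [0, 90], degrees=True)
key_positions = np.array([[0.0, 0.0, 0.0],
                          [2.0, 0.0, 0.0]])

slerp = Slerp(key_times, key_rotations)

def interpolate_pose(t):
    """Intermediate pose: slerp for rotation, linear interpolation for position."""
    rot = slerp(t)
    pos = (1.0 - t) * key_positions[0] + t * key_positions[1]
    return rot, pos

rot, pos = interpolate_pose(0.5)
print(np.degrees(rot.magnitude()))  # rotation angle halfway between 0 and 90 degrees
print(pos)                          # position halfway between the two key positions
```

Slerp interpolates along the geodesic on the rotation manifold, so intermediate camera orientations turn at a constant angular rate, which is why it is commonly preferred over naively interpolating rotation matrices or Euler angles.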
Our code will be open-sourced after the publication of this manuscript (https://github.com/oucliuyang/MFGaussian).