Since the introduction of neural radiance fields (NeRF), neural rendering has developed rapidly, showing significant advantages in novel view synthesis and 3D reconstruction. Recently, editable scene rendering has also been explored extensively. Unfortunately, many existing high-quality neural rendering techniques do not support scene editing, while methods that do often must learn a separate neural radiance field for each object and for the background, making these high-quality but complex networks difficult to deploy in practice. In this paper, we introduce the Neural Voxel Fusion Field (NVFF), a new method for editable scene rendering. Specifically, we identify shortcomings in the feature extraction of traditional voxel-based editable scene rendering and propose a more effective feature fusion strategy, which achieves higher-quality editable rendering with only a slight increase in memory overhead. On the ToyDesk dataset, our method obtains PSNR = 22.10, SSIM = 0.88, and LPIPS = 0.22. Compared with other editable scene rendering approaches, it matches or exceeds their rendering quality while training faster.
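The abstract does not specify the fusion mechanism, but the general pattern behind voxel-based scene representations with a fusion step can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: features are sampled from dense voxel grids by trilinear interpolation, and background and object features are fused (here, by simple concatenation, one possible choice) before being passed to a shared decoder.

```python
import numpy as np

def trilinear_features(grid, pts):
    """Sample per-voxel features at continuous 3D points via trilinear
    interpolation. grid: (R, R, R, C) feature volume; pts: (N, 3) coordinates
    in voxel units, within [0, R-1]^3."""
    R = grid.shape[0]
    lo = np.clip(np.floor(pts).astype(int), 0, R - 2)  # lower corner of each cell
    t = pts - lo                                       # fractional offset, (N, 3)
    out = 0.0
    # Accumulate contributions from the 8 corners of each voxel cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, t[:, 0], 1 - t[:, 0])
                     * np.where(dy, t[:, 1], 1 - t[:, 1])
                     * np.where(dz, t[:, 2], 1 - t[:, 2]))
                out = out + w[:, None] * grid[lo[:, 0] + dx,
                                              lo[:, 1] + dy,
                                              lo[:, 2] + dz]
    return out

def fuse_features(bg_grid, obj_grid, pts):
    """One possible fusion strategy (an assumption, not NVFF's): sample the
    background and object voxel grids at the same points and concatenate the
    resulting features, to be decoded by a small shared MLP downstream."""
    return np.concatenate([trilinear_features(bg_grid, pts),
                           trilinear_features(obj_grid, pts)], axis=-1)
```

Because the fused feature carries both background and per-object information at every sample point, editing an object (moving or removing it) only requires modifying its own grid, which is consistent with the memory-versus-quality trade-off the abstract describes.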