Abstract

We address outdoor Neural Radiance Fields (NeRF) [23] with real-world camera views and LiDAR maps. Existing methods usually require densely-sampled source views and do not perform well on open-source camera-LiDAR datasets. In this paper, our design leverages 1) LiDAR sensors for strong 3D geometry priors that significantly improve ray sampling locality, and 2) Conditional Adversarial Networks (cGANs) [15] to recover image details, since aggregating embeddings from imperfect LiDAR maps causes artifacts. Our experiments show that while NeRF baselines produce either noisy or blurry results on Argoverse 2 [42], our system not only outperforms the baselines in image quality metrics under both clean and noisy conditions, but also yields Detectron2 [43] results closer to those on the ground-truth images. Furthermore, the system can be used for data augmentation when training a pose regression network [3] and for multi-season view synthesis. We hope this work serves as a new LiDAR-based NeRF baseline that pushes this research direction forward (released here).
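The "ray sampling locality" idea in the abstract could be sketched as follows: instead of stratified-uniform samples along each camera ray, concentrate samples around the depth suggested by the LiDAR map. This is a minimal illustration under assumed details (the function name, the Gaussian-around-depth strategy, and all parameters are placeholders, not the paper's actual implementation):

```python
import numpy as np

def sample_ray_depths(lidar_depth, n_samples=32, near=0.5, far=100.0,
                      sigma=1.0, rng=None):
    """Sample depths along one ray, biased toward a LiDAR depth prior.

    With a prior, draw Gaussian samples centered on the LiDAR depth so
    most network evaluations land near the surface; without one, fall
    back to standard stratified-uniform sampling over [near, far].
    """
    rng = np.random.default_rng() if rng is None else rng
    if lidar_depth is None:
        # Stratified sampling: one uniform draw per depth bin.
        edges = np.linspace(near, far, n_samples + 1)
        return rng.uniform(edges[:-1], edges[1:])
    # Concentrate samples around the prior, clipped to the valid range.
    depths = rng.normal(lidar_depth, sigma, size=n_samples)
    return np.clip(np.sort(depths), near, far)
```

For example, with `lidar_depth=10.0` and `sigma=0.5`, nearly all samples fall within a few meters of the surface, versus being spread over the full `[near, far]` interval.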

Keywords:
LiDAR, radiance, computer science, artificial intelligence, ground truth, computer vision, locality, artificial neural network, remote sensing, prior probability, image (mathematics), geography, Bayesian probability

Metrics

Cited by: 10
FWCI (Field-Weighted Citation Impact): 1.82
References: 45
Citation Normalized Percentile: 0.83

Topics

Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Generative Adversarial Networks and Image Synthesis (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)