Sören Discher, Leon Masopust, Sebastian Schulz, Rico Richter, Jürgen Döllner
Real-time rendering of 3D point clouds allows for interactively exploring and inspecting real-world assets, sites, or regions on a broad range of devices but has to cope with their vastly different computing capabilities. Virtual reality (VR) applications rely on high frame rates (i.e., around 90 fps as opposed to 30–60 fps) and are highly sensitive to any kind of visual artifact typical of 3D point cloud depictions (e.g., holey surfaces or visual clutter due to inappropriate point sizes). We present a novel rendering system that allows for an immersive, nausea-free exploration of arbitrarily large 3D point clouds on state-of-the-art VR devices such as the HTC Vive and Oculus Rift. Our approach applies several point-based and image-based rendering techniques that are combined in a multi-pass rendering pipeline. The approach does not require deriving generalized, mesh-based representations in a preprocessing step and preserves the precision and density of the raw 3D point cloud data. The presented techniques have been implemented and evaluated on massive real-world data sets from aerial, mobile, and terrestrial acquisition campaigns containing up to 2.6 billion points, demonstrating the practicability and scalability of our approach.
Hyungwoo Kang, Seonyoung Jang, Byung-Tae Oh
Wallace Luke, Hillman Samuel, Reinke Karin, Hally Bryan
Renan Machado e Silva, Cláudio Esperança, António Oliveira
Alvaro Casado-Coscolla, Carlos Sánchez-Belenguer, Erik Wolfart, V. Sequeira