Alberto Argiles, Javier Civera, Luis Montesano
Ego-motion estimation and 3D scene reconstruction from image data have been long-term aims in both the Robotics and Computer Vision communities. Nevertheless, while visual SLAM and Structure from Motion already provide accurate ego-motion estimates, visual scene estimation does not yet offer such satisfactory results, being in most cases limited to a sparse set of salient points. In this paper we propose an algorithm that densifies a sparse point-based reconstruction into a dense, multi-plane-based one, taking only a set of sparse images as input.
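To illustrate the core idea of grouping a sparse point cloud into dominant planes (a building block of plane-based densification), the sketch below fits a single plane to noisy 3D points with RANSAC. This is a hypothetical, minimal illustration using only NumPy, not the authors' algorithm; the function name, thresholds, and synthetic data are all assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit one plane (n, d with n.p + d = 0) to a 3D cloud via RANSAC.

    Hypothetical sketch: sparse map points are explained by a dominant
    plane, onto which a dense surface could then be hypothesised.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 points and derive the plane normal from two edge vectors.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:  # degenerate (near-collinear) sample, skip it
            continue
        n /= norm
        d = -n @ sample[0]
        # Point-to-plane distances; keep the model with the most inliers.
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic sparse map: 100 points near the plane z = 0 plus 20 outliers.
gen = np.random.default_rng(0)
plane_pts = np.column_stack([gen.uniform(-1, 1, (100, 2)),
                             gen.normal(0, 0.01, 100)])
outliers = gen.uniform(-1, 1, (20, 3))
cloud = np.vstack([plane_pts, outliers])

model, inliers = ransac_plane(cloud, rng=1)
```

In a multi-plane setting this step would be repeated: fit a plane, remove its inliers, and fit the next plane to the remaining points.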