Techniques for image-based 3D scene reconstruction are popular and support a number of secondary applications. Traditional approaches require several captures to cover whole environments due to the narrow field of view (FoV) of pinhole-based (perspective) cameras. This paper summarizes the main contributions of the homonymous Ph.D. thesis, which addresses the 3D scene reconstruction problem using omnidirectional (spherical or 360°) cameras, which present a 360° × 180° FoV. Although spherical imagery has the benefit of the full FoV, it is also challenging due to the inherent distortions involved in the capture and representation of such images, which might compromise the use of many well-established algorithms for image processing and computer vision. The referred Ph.D. thesis introduces novel methodologies for estimating dense depth maps from two or more uncalibrated and temporally unordered 360° images. It also presents a framework for inferring depth from a single spherical image. We validate our approaches using both synthetic data and computer-generated imagery, showing competitive results with respect to other state-of-the-art methods.
Thiago L. T. da Silveira, Cláudio R. Jung
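The distortions mentioned in the abstract stem from how 360° × 180° content is stored: a spherical image is typically unwrapped into an equirectangular panorama, where each pixel corresponds to a (longitude, latitude) pair rather than a perspective projection, so pixels near the poles cover far less solid angle than pixels at the equator. As a minimal illustration (not code from the thesis), the sketch below maps equirectangular pixel coordinates to unit ray directions on the sphere; the function name and sampling conventions are assumptions for this example.

```python
import numpy as np

def equirect_to_rays(width, height):
    """Map every pixel of a width x height equirectangular (360-degree)
    image to a unit direction vector on the sphere.

    Assumed conventions (illustrative, not from the thesis): longitude
    spans [-pi, pi) across columns, latitude spans [pi/2, -pi/2] down
    rows, and pixel centers are sampled at +0.5 offsets.
    """
    u = (np.arange(width) + 0.5) / width       # normalized column in (0, 1)
    v = (np.arange(height) + 0.5) / height     # normalized row in (0, 1)
    lon = (u - 0.5) * 2.0 * np.pi              # longitude in (-pi, pi)
    lat = (0.5 - v) * np.pi                    # latitude in (-pi/2, pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # Spherical-to-Cartesian conversion; every resulting ray has unit norm.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)        # shape: (height, width, 3)

rays = equirect_to_rays(512, 256)
```

Because rows near the top and bottom of the panorama collapse toward the poles, neighboring pixels there correspond to nearly identical ray directions, which is one reason planar image-processing algorithms degrade on spherical imagery.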