Seong-Oh Lee, Hwasup Lim, Hyoung-Gon Kim, Sang Chul Ahn
We present RGB-D Fusion, a framework that robustly tracks and reconstructs dense textured surfaces of scenes and objects by integrating both color and depth images streamed from an RGB-D sensor into a global colored volume in real time. To handle failures of the ICP-based tracking approach of KinectFusion caused by a lack of sufficient geometric information, we propose a novel approach that registers the input RGB-D image against the colored volume through photometric tracking and geometric alignment. We demonstrate the strengths of the proposed approach compared with the ICP-based approach and show the superior performance of our algorithm on real-world data.

I. INTRODUCTION

Real-time photo-realistic 3D reconstruction of real-world scenes and objects plays an important role in a variety of applications such as robotics, augmented reality, human-computer interaction, and entertainment. One of the main requirements for robot interaction with the environment is the 3D reconstruction and representation of real-world environments.

With the introduction of the low-cost Microsoft Kinect sensor, a representative consumer-grade range-sensing device, high-resolution range images aligned with photometric information have become readily available. The Kinect sensor opens up new possibilities for building a low-cost 3D modeling system with well-studied range-mapping technologies.
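The abstract names photometric tracking as the complement to geometric alignment but does not spell out the mechanism. As a generic illustration only (not the authors' algorithm), photometric tracking amounts to searching for the camera motion that minimizes the sum of squared intensity residuals between a reference image and the observed one. The toy sketch below reduces "pose" to a 1D integer shift to show that least-squares idea; all names and signals here are hypothetical:

```python
# Toy illustration of the photometric-tracking idea: find the shift
# (a stand-in for camera pose) minimizing the photometric error between
# a reference intensity signal and an observed one. Real systems
# optimize a 6-DoF pose with Gauss-Newton; this is only the concept.

def photometric_error(ref, obs, shift):
    """Sum of squared intensity residuals after shifting obs by shift.
    Samples outside the observed signal are treated as intensity 0."""
    err = 0.0
    for i, r in enumerate(ref):
        j = i + shift
        o = obs[j] if 0 <= j < len(obs) else 0.0
        err += (r - o) ** 2
    return err

def track_shift(ref, obs, max_shift=5):
    """Exhaustive search over candidate shifts (pose hypotheses)."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: photometric_error(ref, obs, s))

reference = [0, 0, 10, 20, 10, 0, 0, 0]
observed  = [0, 0, 0, 10, 20, 10, 0, 0]   # same "scene", shifted by +1
print(track_shift(reference, observed))    # -> 1
```

In a full RGB-D system this photometric term would be combined with a geometric (depth) residual, which is what lets tracking survive where ICP alone fails for lack of geometric structure.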