Image-to-point cloud registration is the task of fusing 2D images with 3D point clouds, combining the rich texture information of images with the spatial structure of point clouds for a more comprehensive scene representation. However, the feature discrepancy between the two modalities makes accurate matching challenging. To address this issue, this paper proposes an image-matching-based algorithm for image-to-point cloud registration. First, to bridge the cross-modal gap, we render screenshots of the 3D models with a 3D visualization tool and read the camera parameters of each screenshot, establishing a correspondence between the 3D models and 2D images. A pretrained image matching model then extracts and matches keypoints between the screenshots and the query images, yielding image-to-image keypoint correspondences. Next, using the camera projection model, the algorithm maps 3D coordinates to the screenshot keypoints, obtaining image-to-point cloud keypoint correspondences, and computes the transformation matrix between the image and the 3D point cloud with PnP-RANSAC. Compared with manual annotation, the proposed method significantly improves efficiency and stability in real-world scenarios.
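The camera projection step described above, which maps 3D coordinates to image keypoints, follows the standard pinhole model. Below is a minimal NumPy sketch of that projection; the intrinsic matrix values and the pose are hypothetical placeholders for illustration, not parameters from the paper.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points into the image plane with a pinhole camera model.

    points_3d: (N, 3) world coordinates
    K:         (3, 3) camera intrinsic matrix
    R, t:      (3, 3) rotation and (3,) translation (world -> camera)
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t        # world -> camera coordinates
    uvw = cam @ K.T                  # apply intrinsics (homogeneous pixels)
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Hypothetical intrinsics and an identity pose, for illustration only
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m straight ahead projects to the principal point (320, 240)
pts = np.array([[0.0, 0.0, 2.0]])
print(project_points(pts, K, R, t))
```

In the full pipeline, these projected 2D-3D correspondences would then be passed to a PnP-RANSAC solver (e.g. OpenCV's `cv2.solvePnPRansac`) to recover the image-to-point cloud transformation.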