Yong Wang, Pengbo Zhou, Guohua Geng, Li An, Qi Zhang
In real-world scenarios, factors such as sensor noise often leave point cloud data with low overlap, which poses challenges for traditional registration methods. To address this issue, we propose a Transformer-based method for low-overlap point cloud registration. The algorithm employs a dynamic positional encoding strategy that adaptively computes a position encoding for each point based on its distribution, capturing richer spatial relationships between point clouds and adapting to the diverse point cloud distributions of different scenes. Furthermore, we combine self-attention with graph convolutions: the self-attention mechanism captures global dependencies among points, while the graph convolutions capture local neighborhood information. Finally, adaptive weights are introduced into the cross-attention computation: attention scores are multiplied by adaptive weights, strengthening the model's focus on regions crucial to registration. In low-overlap scenarios, the algorithm significantly improves the registration success rate, achieving a new state-of-the-art registration recall of 71.7% on the 3DLoMatch benchmark.
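The adaptive-weighted cross-attention described above can be sketched as follows. This is a minimal NumPy illustration of one plausible reading of the abstract, not the authors' implementation: the per-key adaptive weights `w_adapt` are assumed to come from some learned scoring module (e.g. an overlap predictor), and here they are simply passed in as an array. Attention scores are multiplied by these weights and renormalized before aggregating values.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_cross_attention(q_feats, k_feats, v_feats, w_adapt):
    """Cross-attention with per-key adaptive weights.

    q_feats: (Nq, d) query-frame point features
    k_feats: (Nk, d) key-frame point features
    v_feats: (Nk, d) values attached to the key frame
    w_adapt: (Nk,)  non-negative adaptive weights (assumed to be
             produced by a learned module; hypothetical here)
    """
    d = q_feats.shape[-1]
    scores = q_feats @ k_feats.T / np.sqrt(d)        # (Nq, Nk) raw scores
    attn = softmax(scores, axis=-1)                  # standard attention
    attn = attn * w_adapt[None, :]                   # emphasize key points
    attn = attn / attn.sum(axis=-1, keepdims=True)   # rows sum to 1 again
    return attn @ v_feats                            # (Nq, d) aggregated
```

Setting a key point's weight to zero removes its influence entirely, so the model can down-weight points outside the predicted overlap region while keeping a valid attention distribution.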
Li An, Pengbo Zhou, Mingquan Zhou, Yong Wang, Qi Zhang
Chien Erh Lin, Minghan Zhu, Maani Ghaffari
KONG Yu, XIONG Fengguang, ZHANG Zhiqiang, SHEN Chaofan, HU Mingyue
Tianming Zhao, Linfeng Li, Tian Tian, Jiayi Ma, Jinwen Tian
Zhi-Huang Lin, Chun-Yang Zhang, Xiangyang Lin, Huibin Lin, Guang Zeng, C. L. Philip Chen