Yue Wu, Qianlin Yao, Xiaolong Fan, Maoguo Gong, Wenping Ma, Qiguang Miao
Point cloud registration is a critical task in many 3D computer vision studies, aiming to find a rigid transformation that aligns one point cloud with another. In this paper, we propose PANet, a Point-Attention based multi-scale feature fusion network for partially overlapping point cloud registration. This study investigates whether multi-scale features improve alignment precision more effectively than fixed-scale local features. PANet comprises two core components: a multi-branch feature extraction module that extracts local features at different scales in parallel, and a Point-Attention Module that learns an appropriate weight for each branch and then fuses these multi-scale features by weighted combination to enhance their representation ability. At the end of the network, four hidden layers regress the rigid transformation from the source point cloud to the template point cloud. Experiments on the synthetic ModelNet40 dataset demonstrate that PANet achieves state-of-the-art performance in terms of both alignment precision and robustness against noise. PANet also exhibits strong generalization ability on the real-world Stanford 3D and ICL-NUIM datasets. In addition, we evaluate the computational complexity of our model against previous works. The results and ablation studies demonstrate that multi-scale fused local features improve registration accuracy more than fixed-scale local features. The findings may inspire future research in related fields and contribute to the development of new ideas and approaches.
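As a rough illustration of the attention-weighted fusion the abstract describes (a minimal sketch, not the paper's implementation), the snippet below combines per-point features from several scale branches using softmax attention weights. All shapes, the scoring vector `score_w`, and the function name are assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_attention_fusion(branch_feats, score_w):
    """Fuse multi-scale per-point features by weighted combination.

    branch_feats: (S, N, C) array — S scale branches, N points, C channels
                  (hypothetical layout).
    score_w:      (C,) scoring vector standing in for the learned
                  Point-Attention layer (an assumption, not the paper's
                  actual parameterization).
    Returns the fused (N, C) features and the (S, N) branch weights.
    """
    scores = branch_feats @ score_w            # (S, N) per-point branch scores
    weights = softmax(scores, axis=0)          # normalize across branches
    fused = (weights[..., None] * branch_feats).sum(axis=0)
    return fused, weights

# Example: fuse 3 hypothetical scale branches for 5 points, 8 channels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 5, 8))
fused, weights = point_attention_fusion(feats, rng.normal(size=8))
```

The per-point weights sum to 1 across branches, so the fused feature is a convex combination of the branch features at each point.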