With the rapid advancement of deep learning in computer vision, accurately performing classification and segmentation on 3D point clouds has become increasingly important. Despite significant progress, challenges persist due to the uneven density, noise, sparsity, shape complexity, and disorder inherent in 3D point cloud data, and the accuracy of these tasks is closely tied to the quality of the features extracted by the network. This study presents SO-PointNet++, an enhanced PointNet++ architecture that integrates Shuffle Attention and Offset-Attention mechanisms to improve local and global feature extraction. Additionally, a residual connection module is incorporated into the architecture to mitigate gradient vanishing. We demonstrate SO-PointNet++'s efficacy through experiments on the ModelNet40 and ShapeNet datasets, achieving an overall accuracy of 92.7% and a mean Intersection over Union (mIoU) of 85.6%, respectively. These results outperform existing methods, highlighting SO-PointNet++'s potential to advance point cloud classification and segmentation.
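The abstract names two attention mechanisms; the less common one, Offset-Attention, replaces a standard self-attention output with the difference (offset) between the input features and the attention output, followed by a residual add. The sketch below is a simplified single-head NumPy illustration of that idea, not the paper's implementation: the function name `offset_attention`, the weight arguments, and the omission of the usual linear-BatchNorm-ReLU layer on the offset are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, wq, wk, wv):
    """Simplified offset-attention over point features.

    x           : (N, d) per-point feature matrix (N points, d channels)
    wq, wk, wv  : (d, d) query/key/value projection weights (assumed names)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)  # (N, N) weights
    sa = attn @ v           # standard self-attention output
    offset = x - sa         # offset between input and attention features
    return x + offset       # residual add (feed-forward layer omitted)
```

In practice the offset would pass through a learned layer before the residual add; the point of the construction is that the offset emphasizes how each point's features deviate from the attention-weighted context, which tends to sharpen local geometric detail.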
Mengbin Rao, Sen Yuan, Ping Tang, Jianjun Ge