JOURNAL ARTICLE

Multi-Neighborhood Sparse Feature Selection for Semantic Segmentation of LiDAR Point Clouds

Rui Zhang, Guan-Long Huang, Forrest Sheng Bao, Xin Guo

Year: 2025   Journal: Remote Sensing   Vol: 17 (13)   Article: 2288   Publisher: Multidisciplinary Digital Publishing Institute

Abstract

LiDAR point clouds, as direct carriers of 3D spatial information, comprehensively record the geometric features and spatial topological relationships of object surfaces, providing intelligent systems with rich 3D scene representations. However, current point cloud semantic segmentation methods primarily extract features through operations such as convolution and pooling, and fail to adequately account for sparse features that significantly influence the final results of point cloud-based scene perception, resulting in insufficient feature representation capability. To address these problems, a sparse feature dynamic graph convolutional neural network, abbreviated as SFDGNet, is constructed in this paper for LiDAR point clouds of complex scenes. In the context of this paper, sparse features refer to feature representations in which only a small number of activation units or channels exhibit significant responses during the forward pass of the model. First, a sparse feature regularization method was used to encourage the network to learn a sparsified feature weight matrix. Next, a split edge convolution module, abbreviated as SEConv, was designed to extract local features of the point cloud from multiple neighborhoods by dividing the input feature channels, and to effectively learn sparse features while avoiding feature redundancy. Finally, a multi-neighborhood feature fusion strategy was developed that combines an attention mechanism to fuse the local features of different neighborhoods and obtain global features with fine-grained information. Using the S3DIS and ScanNet v2 datasets, we evaluated the feasibility and effectiveness of SFDGNet by comparing it with six typical semantic segmentation models. Compared with the benchmark model DGCNN, SFDGNet improved overall accuracy (OA), mean accuracy (mAcc), mean intersection over union (mIoU), and sparsity by 1.8%, 3.7%, 3.5%, and 85.5% on the S3DIS dataset, respectively. The mIoU on the ScanNet v2 validation set, the mIoU on the test set, and sparsity were improved by 3.2%, 7.0%, and 54.5%, respectively.
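The abstract's two core ideas — an edge convolution that splits the input channels across several k-NN neighborhoods, and a sparsity penalty on the learned weights — can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the function names, the channel-splitting scheme, the use of an L1 penalty as the sparsity regularizer, and all shapes are assumptions made for illustration.

```python
import numpy as np

def knn(points, k):
    # Indices of the k nearest neighbors of each point (self excluded).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_features(feats, idx):
    # DGCNN-style edge feature: concat(x_i, x_j - x_i) for each neighbor j.
    center = np.repeat(feats[:, None, :], idx.shape[1], axis=1)  # (N, k, C)
    neighbor = feats[idx]                                        # (N, k, C)
    return np.concatenate([center, neighbor - center], axis=-1)  # (N, k, 2C)

def split_edge_conv(feats, points, ks, weights, l1_lambda=1e-3):
    # Split the input channels into len(ks) groups; each group is processed
    # with a different neighborhood size k, so branches see different scales.
    groups = np.array_split(np.arange(feats.shape[1]), len(ks))
    outs = []
    for g, k, W in zip(groups, ks, weights):
        e = edge_features(feats[:, g], knn(points, k))  # (N, k, 2|g|)
        h = np.maximum(e @ W, 0.0)                      # shared MLP + ReLU
        outs.append(h.max(axis=1))                      # max-pool over neighbors
    out = np.concatenate(outs, axis=-1)
    # Hypothetical sparsity regularizer: L1 penalty pushing weights toward
    # a sparse matrix, so only a few channels respond strongly.
    reg = l1_lambda * sum(np.abs(W).sum() for W in weights)
    return out, reg

# Toy usage: 16 points in 3D, using coordinates as input features.
pts = np.random.RandomState(0).randn(16, 3)
weights = [np.random.RandomState(1).randn(4, 8) * 0.1,   # group of 2 channels -> 2*2 in
           np.random.RandomState(2).randn(2, 8) * 0.1]   # group of 1 channel  -> 2*1 in
out, reg = split_edge_conv(pts, pts, ks=[4, 8], weights=weights)
```

The multi-neighborhood fusion described in the abstract would then combine the per-branch outputs with attention weights rather than plain concatenation; the sketch above stops at concatenation to stay minimal.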

Keywords:
Point cloud, LiDAR, semantic segmentation, feature selection, artificial intelligence, pattern recognition, computer vision, remote sensing, geometry

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 62
Citation Normalized Percentile: 0.21

Topics

Remote Sensing and LiDAR Applications
Physical Sciences →  Environmental Science →  Environmental Engineering
Image Processing and 3D Reconstruction
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
3D Surveying and Cultural Heritage
Physical Sciences →  Earth and Planetary Sciences →  Geology