JOURNAL ARTICLE

Weakly Supervised Point Cloud Semantic Segmentation Based on Multidimensional Feature Fusion and Feature Representation

Abstract

In recent years, with the spread of LiDAR, depth cameras, and similar devices, and with the development of intelligent robots, high-precision maps, smart cities, and related fields, the demand for large-scale outdoor scene understanding and environment perception has kept growing, and 3D point cloud semantic segmentation is one of the corresponding research focuses. However, current 3D semantic segmentation systems rely mainly on fully labelled 3D scenes for training, and fully annotating tens of millions or even hundreds of millions of points is time-consuming and costly. Inspired by weakly supervised semantic segmentation of 2D scenes, state-of-the-art research has begun to segment 3D scenes using only a small number of labels. Two challenges remain: outdoor point cloud data are large in scale and cover wide areas, which makes it difficult for neural networks to understand spatial structure at such scales; and the information carried by the sparse labels is critical, so the model must improve its utilisation of these sparse supervision signals.
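To make the sparse-label setting described above concrete, the sketch below shows the standard way such supervision is consumed during training: a cross-entropy loss computed only over the small subset of labelled points, with unlabelled points masked out. This is a generic illustration of masked sparse-label training, not the specific method of the paper; the function name and the `ignore_index` convention are illustrative assumptions.

```python
import math

def sparse_label_loss(logits, labels, ignore_index=-1):
    """Cross-entropy averaged over labelled points only.

    logits: per-point lists of class scores, shape (num_points, num_classes)
    labels: per-point class indices; ignore_index marks unlabelled points
    """
    total, count = 0.0, 0
    for scores, y in zip(logits, labels):
        if y == ignore_index:
            continue  # unlabelled points contribute no supervised signal
        # numerically stable log-sum-exp for the softmax normaliser
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[y]  # -log softmax(scores)[y]
        count += 1
    return total / count if count else 0.0

# Toy example: 4 points, 3 classes, only two points carry labels.
logits = [[2.0, 0.1, 0.1], [0.1, 2.0, 0.1], [0.5, 0.5, 0.5], [0.1, 0.1, 2.0]]
labels = [0, -1, -1, 2]
loss = sparse_label_loss(logits, labels)
```

With, say, 0.1% of points labelled, only that fraction of the scene contributes to the loss, which is exactly why improving the utilisation of the sparse signal (e.g. by propagating it through fused multi-scale features) matters.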

Keywords:
Point cloud; Semantic segmentation; Weakly supervised learning; Feature fusion; Feature representation; Artificial intelligence; Pattern recognition; Computer science

Metrics

Cited by: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 22
Citation Normalized Percentile: 0.19

Topics

Remote Sensing and LiDAR Applications
Physical Sciences →  Environmental Science →  Environmental Engineering
3D Surveying and Cultural Heritage
Physical Sciences →  Earth and Planetary Sciences →  Geology
Image Processing and 3D Reconstruction
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
